Compare commits

...

33 Commits
trunk ... 0.8.1

Author SHA1 Message Date
Jakob Homan 7847e9c703 KAFKA-1308; Publish jar of test utilities to Maven. Jun Rao and Jakob Homan; reviewed by Neha Narkhede. 10 years ago
Joe Stein 150d0a70cb bump kafka version to 0.8.1.1 in gradle.properties patch by Joe Stein reviewed by Joel Koshy 11 years ago
Joel Koshy 874620d965 KAFKA-1327; Log cleaner metrics follow-up patch to reset dirtiest log 11 years ago
Joel Koshy 1e9e107ee9 KAFKA-1356; follow-up - return unknown topic partition on non-existent 11 years ago
Jay Kreps 69fbdf9cb3 KAFKA-1327 Add log cleaner metrics. 11 years ago
Jay Kreps 4bcb22f47e KAFKA-1398 Dynamic config follow-on-comments. 11 years ago
Jay Kreps 2ce7ff6b6e KAFKA-1398 dynamic config changes are broken. 11 years ago
Joel Koshy b18d2c379b KAFKA-1355; Avoid sending all topic metadata on state changes. Reviewed 11 years ago
Joel Koshy 7502696e10 KAFKA-1362; Publish sources and javadoc jars; (also removed Scala 2.8.2-specific actions). Reviewed by Jun Rao and Joe Stein 11 years ago
Joel Koshy eaf514b41a KAFKA-1356 (Follow-up) patch to clean up metadata cache api; reviewed by 11 years ago
Guozhang Wang 839f1b1220 KAFKA-1365; Second Manual preferred replica leader election command always fails; reviewed by Joel Koshy. 11 years ago
Timothy Chen 82f4a8e1c0 KAFKA-1356 Topic metadata requests takes too long to process; reviewed by Joel Koshy, Neha Narkhede, Jun Rao and Guozhang Wang 11 years ago
Joel Koshy 3c4ca854fd KAFKA-1323; Fix regression due to KAFKA-1315 (support for relative 11 years ago
Joel Koshy 48f1b74909 KAFKA-1373; Set first dirty (uncompacted) offset to first offset of the 11 years ago
Neha Narkhede 0ffec142a9 KAFKA-1358: Fixing minor log4j statement 11 years ago
Timothy Chen 2b6375b61c KAFKA-1358 Broker throws exception when reconnecting to zookeeper; reviewed by Neha Narkhede 11 years ago
Neha Narkhede dd08538a4f KAFKA-1350 Fix excessive state change logging;reviewed by Jun,Joel,Guozhang and Timothy 11 years ago
Neha Narkhede 5a6a1d83b8 KAFKA-1317 follow up fix 11 years ago
Timothy Chen 39a560789e KAFKA-1317 KafkaServer 0.8.1 not responding to .shutdown() cleanly, possibly related to TopicDeletionManager or MetricsMeter state; reviewed by Neha Narkhede 11 years ago
Jun Rao 655e1a8aa5 kafka-1319; kafka jar doesn't depend on metrics-annotation any more; patched by Jun Rao; reviewed by Neha Narkhede 11 years ago
Timothy Chen c66e408b24 KAFKA-1315 log.dirs property in KafkaServer intolerant of trailing slash; reviewed by Neha Narkhede and Guozhang Wang 11 years ago
Neha Narkhede 03762453fb KAFKA-1311 Add a flag to turn off delete topic until it is stable; reviewed by Joel and Guozhang 11 years ago
Joe Stein 68baaa4160 KAFKA-1288 add enclosing dir in release tar gz patch by Jun Rao, reviewed by Neha Narkhede 11 years ago
Joe Stein 71a6318be6 KAFKA-1289 Misc. nitpicks in log cleaner for new 0.8.1 features patch by Jay Kreps, reviewed by Sriram Subramanian and Jun Rao 11 years ago
Sriram Subramanian b5971264f2 auto rebalance last commit 11 years ago
Jun Rao a2745382de kafka-1271; controller logs exceptions during ZK session expiration; patched by Jun Rao; reviewed by Guozhang Wang and Jay kreps 11 years ago
Joe Stein fbb3525ce8 KAFKA-1254 remove vestigial sbt patch by Joe Stein; reviewed by Jun Rao 11 years ago
Joe Stein 11f3975930 KAFKA-1274 gradle.properties needs the variables used in the build.gradle patch by Joe Stein; Reviewed by Jun Rao 11 years ago
Joe Stein cb23e5d915 KAFKA-1245 the jar files and pom are not being signed so nexus is failing to publish them patch by Joe Stein; Reviewed by Jun Rao 11 years ago
Joe Stein 879e3e770e KAFKA-1263 Snazzy up the README markdown for better visibility on github; patched by Joe Stein; reviewed by Neha Narkhede 11 years ago
Jun Rao 5023c2ff2e kafka-1244,kafka-1246,kafka-1249; various gradle issues for release; patched by Jun Rao; reviewed by Neha Narkhede 11 years ago
Joe Stein 8d623e157d KAFKA-1158 run rat is not needed this is documented now in the release not part of the server running 11 years ago
Neha Narkhede cef51736c7 KAFKA-330 Delete topic followup - more tests and Joel's review comments 11 years ago
1. LICENSE (31)
2. README-sbt.md (60)
3. README.md (123)
4. bin/kafka-run-class.sh (7)
5. bin/run-rat.sh (35)
6. build.gradle (140)
7. clients/build.sbt (11)
8. config/log4j.properties (2)
9. config/server.properties (9)
10. contrib/LICENSE (1)
11. contrib/NOTICE (1)
12. contrib/hadoop-consumer/LICENSE (203)
13. contrib/hadoop-consumer/build.sbt (1)
14. contrib/hadoop-producer/LICENSE (203)
15. contrib/hadoop-producer/build.sbt (1)
16. core/build.sbt (32)
17. core/src/main/scala/kafka/admin/TopicCommand.scala (19)
18. core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala (2)
19. core/src/main/scala/kafka/api/TopicMetadata.scala (8)
20. core/src/main/scala/kafka/cluster/Partition.scala (4)
21. core/src/main/scala/kafka/controller/ControllerChannelManager.scala (38)
22. core/src/main/scala/kafka/controller/KafkaController.scala (102)
23. core/src/main/scala/kafka/controller/PartitionStateMachine.scala (46)
24. core/src/main/scala/kafka/controller/ReplicaStateMachine.scala (64)
25. core/src/main/scala/kafka/controller/TopicDeletionManager.scala (169)
26. core/src/main/scala/kafka/log/CleanerConfig.scala (2)
27. core/src/main/scala/kafka/log/Log.scala (3)
28. core/src/main/scala/kafka/log/LogCleaner.scala (58)
29. core/src/main/scala/kafka/log/LogCleanerManager.scala (41)
30. core/src/main/scala/kafka/log/LogConfig.scala (8)
31. core/src/main/scala/kafka/log/LogManager.scala (4)
32. core/src/main/scala/kafka/network/RequestChannel.scala (9)
33. core/src/main/scala/kafka/server/KafkaApis.scala (260)
34. core/src/main/scala/kafka/server/KafkaConfig.scala (7)
35. core/src/main/scala/kafka/server/KafkaServer.scala (2)
36. core/src/main/scala/kafka/server/OffsetCheckpoint.scala (2)
37. core/src/main/scala/kafka/server/ReplicaManager.scala (6)
38. core/src/main/scala/kafka/server/TopicConfigManager.scala (52)
39. core/src/main/scala/kafka/utils/Throttler.scala (12)
40. core/src/main/scala/kafka/utils/VerifiableProperties.scala (4)
41. core/src/main/scala/kafka/utils/ZkUtils.scala (2)
42. core/src/test/scala/other/kafka/TestLogCleaning.scala (9)
43. core/src/test/scala/unit/kafka/admin/AdminTest.scala (10)
44. core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala (75)
45. core/src/test/scala/unit/kafka/log/CleanerTest.scala (2)
46. core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala (4)
47. core/src/test/scala/unit/kafka/log/LogManagerTest.scala (78)
48. core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala (37)
49. core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala (51)
50. core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala (2)
51. core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala (75)
52. core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala (20)
53. core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala (4)
54. core/src/test/scala/unit/kafka/utils/TestUtils.scala (12)
55. examples/build.sbt (3)
56. gradle.properties (9)
57. lib/sbt-launch.jar (BIN)
58. perf/build.sbt (1)
59. project/Build.scala (152)
60. project/build.properties (17)
61. project/build/KafkaProject.scala (251)
62. project/plugins.sbt (9)
63. sbt (16)
64. sbt.bat (17)
65. scala.gradle (5)
66. settings.gradle (1)

LICENSE (31)

@@ -200,34 +200,3 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-----------------------------------------------------------------------
SBT LICENSE
Copyright (c) 2008, 2009, 2010 Mark Harrah, Jason Zaugg
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-----------------------------------------------------------------------

README-sbt.md (60)

@@ -1,60 +0,0 @@
# Apache Kafka #
See our [web site](http://kafka.apache.org) for details on the project.
## Building it ##
1. ./sbt update
2. ./sbt package
3. ./sbt assembly-package-dependency
To build for a particular version of Scala (either 2.8.0, 2.8.2, 2.9.1, 2.9.2 or 2.10.1), change step 2 above to:
2. ./sbt "++2.8.0 package"
To build for all supported versions of Scala, change step 2 above to:
2. ./sbt +package
## Running it ##
Follow instructions in http://kafka.apache.org/documentation.html#quickstart
## Running unit tests ##
./sbt test
## Building a binary release zip or gzipped tar ball ##
./sbt release-zip
./sbt release-tar
The release file can be found inside ./target/RELEASE/.
## Other Build Tips ##
Here are some useful sbt commands, to be executed at the sbt command prompt (./sbt). Prefixing with "++<version> " runs the
command for a specific Scala version, prefixing with "+" will perform the action for all versions of Scala, and no prefix
runs the command for the default (2.8.0) version of Scala. -
tasks : Lists all the sbt commands and their descriptions
clean : Deletes all generated files (the target directory).
compile : Compile all the sub projects, but not create the jars
test : Run all unit tests in all sub projects
release-zip : Create all the jars, run unit tests and create a deployable release zip
release-tar : Create all the jars, run unit tests and create a deployable release gzipped tar ball
package: Creates jars for src, test, docs etc
projects : List all the sub projects
project sub_project_name : Switch to a particular sub-project. For example, to switch to the core kafka code, use "project core-kafka"
The following commands can be run only on a particular sub project -
test-only package.test.TestName : Runs only the specified test in the current sub project
run : Provides options to run any of the classes that have a main method. For example, you can switch to project java-examples, and run the examples there by executing "project java-examples" followed by "run"
For more details please see the [SBT documentation](https://github.com/harrah/xsbt/wiki)
## Contribution ##
Kafka is a new project, and we are interested in building the community; we would welcome any thoughts or [patches](https://issues.apache.org/jira/browse/KAFKA). You can reach us [on the Apache mailing lists](http://kafka.apache.org/contact.html).
To contribute follow the instructions here:
* http://kafka.apache.org/contributing.html
We also welcome patches for the website and documentation which can be found here:
* https://svn.apache.org/repos/asf/kafka/site

README.md (123)

@@ -1,81 +1,88 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Apache Kafka #
Apache Kafka
=================
See our [web site](http://kafka.apache.org) for details on the project.
## Building a jar and running it ##
1. ./gradlew copyDependantLibs
2. ./gradlew jar
3. Follow instructions in http://kafka.apache.org/documentation.html#quickstart
### Building a jar and running it ###
./gradlew jar
## Running unit tests ##
./gradlew test
Follow instructions in http://kafka.apache.org/documentation.html#quickstart
## Forcing re-running unit tests w/o code change ##
./gradlew cleanTest test
### Building source jar ###
./gradlew srcJar
## Running a particular unit test ##
./gradlew -Dtest.single=RequestResponseSerializationTest core:test
### Building javadocs and scaladocs ###
./gradlew javadoc
./gradlew javadocJar # builds a jar from the javadocs
./gradlew scaladoc
./gradlew scaladocJar # builds a jar from the scaladocs
./gradlew docsJar # builds both javadoc and scaladoc jar
### Running unit tests ###
./gradlew test
### Forcing re-running unit tests w/o code change ###
./gradlew cleanTest test
### Running a particular unit test ###
./gradlew -Dtest.single=RequestResponseSerializationTest core:test
### Building a binary release gzipped tar ball ###
./gradlew clean
./gradlew releaseTarGz
## Building a binary release gzipped tar ball ##
./gradlew clean
./gradlew releaseTarGz
The release file can be found inside ./core/build/distributions/.
## Cleaning the build ##
./gradlew clean
### Cleaning the build ###
./gradlew clean
### Running a task on a particular version of Scala ####
(either 2.8.0, 2.8.2, 2.9.1, 2.9.2 or 2.10.1) (If building a jar with a version other than 2.8.0, the scala version variable in bin/kafka-run-class.sh needs to be changed to run quick start.)
./gradlew -PscalaVersion=2.9.1 jar
./gradlew -PscalaVersion=2.9.1 test
./gradlew -PscalaVersion=2.9.1 releaseTarGz
### Running a task for a specific project ###
This is for 'core', 'perf', 'contrib:hadoop-consumer', 'contrib:hadoop-producer', 'examples' and 'clients'
./gradlew core:jar
./gradlew core:test
## Running a task on a particular version of Scala (either 2.8.0, 2.8.2, 2.9.1, 2.9.2 or 2.10.1) ##
## (If building a jar with a version other than 2.8.0, the scala version variable in bin/kafka-run-class.sh needs to be changed to run quick start.) ##
./gradlew -PscalaVersion=2.9.1 jar
./gradlew -PscalaVersion=2.9.1 test
./gradlew -PscalaVersion=2.9.1 releaseTarGz
### Listing all gradle tasks ###
./gradlew tasks
## Running a task for a specific project in 'core', 'perf', 'contrib:hadoop-consumer', 'contrib:hadoop-producer', 'examples', 'clients' ##
./gradlew core:jar
./gradlew core:test
### Building IDE project ####
./gradlew eclipse
./gradlew idea
## Listing all gradle tasks ##
./gradlew tasks
### Building the jar for all scala versions and for all projects ###
./gradlew jarAll
## Building IDE project ##
./gradlew eclipse
./gradlew idea
### Running unit tests for all scala versions and for all projects ###
./gradlew testAll
## Building the jar for all scala versions and for all projects ##
./gradlew jarAll
### Building a binary release gzipped tar ball for all scala versions ###
./gradlew releaseTarGzAll
## Running unit tests for all scala versions and for all projects ##
./gradlew testAll
### Publishing the jar for all versions of Scala and for all projects to maven ###
./gradlew uploadArchivesAll
## Building a binary release gzipped tar ball for all scala versions ##
./gradlew releaseTarGzAll
Please note for this to work you should create/update `~/.gradle/gradle.properties` and assign the following variables
## Publishing the jar for all versions of Scala and for all projects to maven (To test locally, change mavenUrl in gradle.properties to a local dir.) ##
./gradlew uploadArchivesAll
mavenUrl=
mavenUsername=
mavenPassword=
signing.keyId=
signing.password=
signing.secretKeyRingFile=
## Building the test jar ##
./gradlew testJar
### Building the test jar ###
./gradlew testJar
## Determining how transitive dependencies are added ##
./gradlew core:dependencies --configuration runtime
### Determining how transitive dependencies are added ###
./gradlew core:dependencies --configuration runtime
## Contribution ##
### Contribution ###
Kafka is a new project, and we are interested in building the community; we would welcome any thoughts or [patches](https://issues.apache.org/jira/browse/KAFKA). You can reach us [on the Apache mailing lists](http://kafka.apache.org/contact.html).
Apache Kafka is interested in building the community; we would welcome any thoughts or [patches](https://issues.apache.org/jira/browse/KAFKA). You can reach us [on the Apache mailing lists](http://kafka.apache.org/contact.html).
To contribute follow the instructions here:
* http://kafka.apache.org/contributing.html

bin/kafka-run-class.sh (7)

@@ -32,13 +32,6 @@ if [ -z "$SCALA_VERSION" ]; then
SCALA_VERSION=2.8.0
fi
# TODO: remove when removing sbt
# assume all dependencies have been packaged into one jar with sbt-assembly's task "assembly-package-dependency"
for file in $base_dir/core/target/scala-${SCALA_VERSION}/*.jar;
do
CLASSPATH=$CLASSPATH:$file
done
# run ./gradlew copyDependantLibs to get all dependant jars in a local dir
for file in $base_dir/core/build/dependant-libs-${SCALA_VERSION}/*.jar;
do

bin/run-rat.sh (35)

@@ -1,35 +0,0 @@
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
base_dir=$(dirname $0)/..
rat_excludes_file=$base_dir/.rat-excludes
if [ -z "$JAVA_HOME" ]; then
JAVA="java"
else
JAVA="$JAVA_HOME/bin/java"
fi
rat_command="$JAVA -jar $base_dir/lib/apache-rat-0.8.jar --dir $base_dir "
for f in $(cat $rat_excludes_file);
do
rat_command="${rat_command} -e $f"
done
echo "Running " $rat_command
$rat_command > $base_dir/rat.out

build.gradle (140)

@@ -28,30 +28,37 @@ allprojects {
}
apply from: file('gradle/license.gradle')
apply from: file('scala.gradle')
subprojects {
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'maven'
apply plugin: 'signing'
uploadArchives {
repositories {
// To test locally, replace mavenUrl in gradle.properties to file://localhost/tmp/myRepo/
mavenDeployer {
repository(url: "${mavenUrl}") {
authentication(userName: "${mavenUsername}", password: "${mavenPassword}")
}
afterEvaluate {
pom.artifactId = "${archivesBaseName}"
pom.project {
name 'Apache Kafka'
packaging 'jar'
url 'http://kafka.apache.org'
licenses {
license {
name 'The Apache Software License, Version 2.0'
url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
distribution 'repo'
signing {
sign configurations.archives
// To test locally, replace mavenUrl in ~/.gradle/gradle.properties to file://localhost/tmp/myRepo/
mavenDeployer {
beforeDeployment { MavenDeployment deployment -> signing.signPom(deployment) }
repository(url: "${mavenUrl}") {
authentication(userName: "${mavenUsername}", password: "${mavenPassword}")
}
afterEvaluate {
pom.artifactId = "${archivesBaseName}"
pom.project {
name 'Apache Kafka'
packaging 'jar'
url 'http://kafka.apache.org'
licenses {
license {
name 'The Apache Software License, Version 2.0'
url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
distribution 'repo'
}
}
}
}
@@ -60,16 +67,68 @@ subprojects {
}
}
jar {
from '../LICENSE'
from '../NOTICE'
}
tasks.withType(Javadoc) {
task srcJar(type:Jar) {
classifier = 'sources'
from '../LICENSE'
from '../NOTICE'
from sourceSets.main.java
}
task javadocJar(type: Jar, dependsOn: javadoc) {
classifier 'javadoc'
from '../LICENSE'
from '../NOTICE'
from javadoc.destinationDir
}
task docsJar(type: Jar, dependsOn: javadocJar) { }
artifacts {
archives srcJar
archives javadocJar
}
}
tasks.withType(ScalaCompile) {
task srcJar(type:Jar, overwrite: true) {
classifier = 'sources'
from '../LICENSE'
from '../NOTICE'
from sourceSets.main.scala
from sourceSets.main.java
}
scalaCompileOptions.useAnt = false
configure(scalaCompileOptions.forkOptions) {
memoryMaximumSize = '1g'
jvmArgs = ['-XX:MaxPermSize=512m']
}
}
tasks.withType(ScalaDoc) {
task scaladocJar(type:Jar) {
classifier = 'scaladoc'
from '../LICENSE'
from '../NOTICE'
from scaladoc
}
task docsJar(type: Jar, dependsOn: ['javadocJar', 'scaladocJar'], overwrite: true) { }
artifacts {
archives scaladocJar
}
}
}
for ( sv in ['2_8_0', '2_8_2', '2_9_1', '2_9_2', '2_10_1'] ) {
for ( sv in ['2_8_0', '2_9_1', '2_9_2', '2_10_1'] ) {
String svInDot = sv.replaceAll( "_", ".")
tasks.create(name: "jar_core_${sv}", type: GradleBuild) {
@@ -84,6 +143,18 @@ for ( sv in ['2_8_0', '2_8_2', '2_9_1', '2_9_2', '2_10_1'] ) {
startParameter.projectProperties = [scalaVersion: "${svInDot}"]
}
tasks.create(name: "srcJar_${sv}", type: GradleBuild) {
buildFile = './build.gradle'
tasks = ['core:srcJar']
startParameter.projectProperties = [scalaVersion: "${svInDot}"]
}
tasks.create(name: "docsJar_${sv}", type: GradleBuild) {
buildFile = './build.gradle'
tasks = ['core:docsJar']
startParameter.projectProperties = [scalaVersion: "${svInDot}"]
}
tasks.create(name: "releaseTarGz_${sv}", type: GradleBuild) {
buildFile = './build.gradle'
tasks = ['releaseTarGz']
@@ -97,23 +168,27 @@ for ( sv in ['2_8_0', '2_8_2', '2_9_1', '2_9_2', '2_10_1'] ) {
}
}
tasks.create(name: "jarAll", dependsOn: ['jar_core_2_8_0', 'jar_core_2_8_2', 'jar_core_2_9_1', 'jar_core_2_9_2', 'jar_core_2_10_1', 'clients:jar', 'perf:jar', 'examples:jar', 'contrib:hadoop-consumer:jar', 'contrib:hadoop-producer:jar']) {
tasks.create(name: "jarAll", dependsOn: ['jar_core_2_8_0', 'jar_core_2_9_1', 'jar_core_2_9_2', 'jar_core_2_10_1', 'clients:jar', 'perf:jar', 'examples:jar', 'contrib:hadoop-consumer:jar', 'contrib:hadoop-producer:jar']) {
}
tasks.create(name: "testAll", dependsOn: ['test_core_2_8_0', 'test_core_2_8_2', 'test_core_2_9_1', 'test_core_2_9_2', 'test_core_2_10_1', 'clients:test']) {
tasks.create(name: "srcJarAll", dependsOn: ['srcJar_2_8_0', 'srcJar_2_9_1', 'srcJar_2_9_2', 'srcJar_2_10_1', 'clients:srcJar', 'perf:srcJar', 'examples:srcJar', 'contrib:hadoop-consumer:srcJar', 'contrib:hadoop-producer:srcJar']) { }
tasks.create(name: "docsJarAll", dependsOn: ['docsJar_2_8_0', 'docsJar_2_9_1', 'docsJar_2_9_2', 'docsJar_2_10_1', 'clients:docsJar', 'perf:docsJar', 'examples:docsJar', 'contrib:hadoop-consumer:docsJar', 'contrib:hadoop-producer:docsJar']) { }
tasks.create(name: "testAll", dependsOn: ['test_core_2_8_0', 'test_core_2_9_1', 'test_core_2_9_2', 'test_core_2_10_1', 'clients:test']) {
}
tasks.create(name: "releaseTarGzAll", dependsOn: ['releaseTarGz_2_8_0', 'releaseTarGz_2_8_2', 'releaseTarGz_2_9_1', 'releaseTarGz_2_9_2', 'releaseTarGz_2_10_1']) {
tasks.create(name: "releaseTarGzAll", dependsOn: ['releaseTarGz_2_8_0', 'releaseTarGz_2_9_1', 'releaseTarGz_2_9_2', 'releaseTarGz_2_10_1']) {
}
tasks.create(name: "uploadArchivesAll", dependsOn: ['uploadCoreArchives_2_8_0', 'uploadCoreArchives_2_8_2', 'uploadCoreArchives_2_9_1', 'uploadCoreArchives_2_9_2', 'uploadCoreArchives_2_10_1', 'perf:uploadArchives', 'examples:uploadArchives', 'contrib:hadoop-consumer:uploadArchives', 'contrib:hadoop-producer:uploadArchives']) {
tasks.create(name: "uploadArchivesAll", dependsOn: ['uploadCoreArchives_2_8_0', 'uploadCoreArchives_2_9_1', 'uploadCoreArchives_2_9_2', 'uploadCoreArchives_2_10_1', 'perf:uploadArchives', 'examples:uploadArchives', 'contrib:hadoop-consumer:uploadArchives', 'contrib:hadoop-producer:uploadArchives']) {
}
project(':core') {
println "Building project 'core' with Scala version $scalaVersion"
apply plugin: 'scala'
archivesBaseName = "kafka_${scalaVersion}"
archivesBaseName = "kafka_${baseScalaVersion}"
def (major, minor, trivial) = scalaVersion.tokenize('.')
@@ -140,7 +215,6 @@ project(':core') {
compile 'org.apache.zookeeper:zookeeper:3.3.4'
compile 'com.101tec:zkclient:0.3'
compile 'com.yammer.metrics:metrics-core:2.2.0'
compile 'com.yammer.metrics:metrics-annotation:2.2.0'
compile 'net.sf.jopt-simple:jopt-simple:3.2'
compile 'org.xerial.snappy:snappy-java:1.0.5'
@@ -174,9 +248,8 @@ project(':core') {
}
tasks.create(name: "releaseTarGz", dependsOn: configurations.archives.artifacts, type: Tar) {
into "."
into "kafka_${baseScalaVersion}-${version}"
compression = Compression.GZIP
classifier = 'dist'
from(project.file("../bin")) { into "bin/" }
from(project.file("../config")) { into "config/" }
from '../LICENSE'
@@ -185,8 +258,12 @@ project(':core') {
from(configurations.archives.artifacts.files) { into("libs/") }
}
jar {
dependsOn 'copyDependantLibs'
}
task testJar(type: Jar) {
appendix = 'test'
classifier = 'test'
from sourceSets.test.output
}
@@ -196,13 +273,17 @@ project(':core') {
exceptionFormat = 'full'
}
}
artifacts {
archives testJar
}
}
project(':perf') {
println "Building project 'perf' with Scala version $scalaVersion"
apply plugin: 'scala'
archivesBaseName = "kafka-perf_${scalaVersion}"
archivesBaseName = "kafka-perf_${baseScalaVersion}"
dependencies {
compile project(':core')
@@ -269,6 +350,7 @@ project(':examples') {
dependencies {
compile project(':core')
}
}
project(':clients') {
@@ -279,7 +361,7 @@ project(':clients') {
}
task testJar(type: Jar) {
appendix = 'test'
classifier = 'test'
from sourceSets.test.output
}
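
A minimal sketch of how the aggregate tasks this build.gradle change defines might be driven; the task names are taken from the diff above, the output path and the ~/.gradle/gradle.properties requirements from the README hunk earlier, and the rest is an assumption about a local checkout of this branch.
# build the core jar for every listed Scala version
./gradlew jarAll
# source, javadoc and scaladoc jars for all versions
./gradlew srcJarAll docsJarAll
# release tarballs end up under ./core/build/distributions/
./gradlew releaseTarGzAll
# uploadArchivesAll additionally expects mavenUrl/mavenUsername/mavenPassword and the
# signing.* entries in ~/.gradle/gradle.properties (see the README hunk above)
./gradlew uploadArchivesAll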

clients/build.sbt (11)

@@ -1,11 +0,0 @@
import sbt._
import Keys._
import AssemblyKeys._
name := "clients"
libraryDependencies ++= Seq(
"com.novocode" % "junit-interface" % "0.9" % "test"
)
assemblySettings

config/log4j.properties (2)

@@ -73,8 +73,6 @@ log4j.additivity.kafka.controller=false
log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
log4j.logger.kafka.log.Cleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.Cleaner=false
log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false

config/server.properties (9)

@@ -40,7 +40,7 @@ port=9092
num.network.threads=2
# The number of threads doing disk I/O
num.io.threads=2
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576
@@ -100,6 +100,10 @@ log.segment.bytes=536870912
# to the retention policies
log.retention.check.interval.ms=60000
# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
@@ -111,6 +115,3 @@ zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
log.cleanup.policy=delete
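
The new comments above leave the cleaner disabled; a hedged sketch of turning it on and marking one topic for compaction follows. The bin/kafka-topics.sh wrapper and the per-topic cleanup.policy key are assumptions based on the 0.8.1-era tooling and are not shown in this hunk.
# broker side (server.properties): enable the cleaner thread
#   log.cleaner.enable=true
# then mark an individual topic for log compaction (per-topic key assumed):
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-compacted-topic --config cleanup.policy=compact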

contrib/LICENSE (1)

@@ -0,0 +1 @@
../LICENSE

contrib/NOTICE (1)

@@ -0,0 +1 @@
../NOTICE

contrib/hadoop-consumer/LICENSE (203)

@@ -1,203 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

contrib/hadoop-consumer/build.sbt (1)

@@ -1 +0,0 @@
crossPaths := false

contrib/hadoop-producer/LICENSE (203)

@@ -1,203 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

contrib/hadoop-producer/build.sbt (1)

@@ -1 +0,0 @@
crossPaths := false

core/build.sbt (32)

@@ -1,32 +0,0 @@
import sbt._
import Keys._
import AssemblyKeys._
name := "kafka"
resolvers ++= Seq(
"SonaType ScalaTest repo" at "https://oss.sonatype.org/content/groups/public/org/scalatest/"
)
libraryDependencies <+= scalaVersion("org.scala-lang" % "scala-compiler" % _ )
libraryDependencies ++= Seq(
"org.apache.zookeeper" % "zookeeper" % "3.3.4",
"com.101tec" % "zkclient" % "0.3",
"org.xerial.snappy" % "snappy-java" % "1.0.5",
"com.yammer.metrics" % "metrics-core" % "2.2.0",
"com.yammer.metrics" % "metrics-annotation" % "2.2.0",
"org.easymock" % "easymock" % "3.0" % "test",
"junit" % "junit" % "4.1" % "test"
)
libraryDependencies <<= (scalaVersion, libraryDependencies) { (sv, deps) =>
deps :+ (sv match {
case "2.8.0" => "org.scalatest" % "scalatest" % "1.2" % "test"
case v if v.startsWith("2.10") => "org.scalatest" %% "scalatest" % "1.9.1" % "test"
case _ => "org.scalatest" %% "scalatest" % "1.8" % "test"
})
}
assemblySettings

core/src/main/scala/kafka/admin/TopicCommand.scala (19)

@@ -34,9 +34,9 @@ object TopicCommand {
val opts = new TopicCommandOptions(args)
// should have exactly one action
val actions = Seq(opts.createOpt, opts.deleteOpt, opts.listOpt, opts.alterOpt, opts.describeOpt).count(opts.options.has _)
val actions = Seq(opts.createOpt, opts.listOpt, opts.alterOpt, opts.describeOpt).count(opts.options.has _)
if(actions != 1) {
System.err.println("Command must include exactly one action: --list, --describe, --create, --delete, or --alter")
System.err.println("Command must include exactly one action: --list, --describe, --create or --alter")
opts.parser.printHelpOn(System.err)
System.exit(1)
}
@@ -50,8 +50,6 @@ object TopicCommand {
createTopic(zkClient, opts)
else if(opts.options.has(opts.alterOpt))
alterTopic(zkClient, opts)
else if(opts.options.has(opts.deleteOpt))
deleteTopic(zkClient, opts)
else if(opts.options.has(opts.listOpt))
listTopics(zkClient, opts)
else if(opts.options.has(opts.describeOpt))
@@ -114,14 +112,6 @@ object TopicCommand {
}
}
def deleteTopic(zkClient: ZkClient, opts: TopicCommandOptions) {
val topics = getTopics(zkClient, opts)
topics.foreach { topic =>
AdminUtils.deleteTopic(zkClient, topic)
println("Topic \"%s\" queued for deletion.".format(topic))
}
}
def listTopics(zkClient: ZkClient, opts: TopicCommandOptions) {
val topics = getTopics(zkClient, opts)
for(topic <- topics)
@@ -216,10 +206,9 @@ object TopicCommand {
val listOpt = parser.accepts("list", "List all available topics.")
val createOpt = parser.accepts("create", "Create a new topic.")
val alterOpt = parser.accepts("alter", "Alter the configuration for the topic.")
val deleteOpt = parser.accepts("delete", "Delete the topic.")
val describeOpt = parser.accepts("describe", "List details for the given topics.")
val helpOpt = parser.accepts("help", "Print usage information.")
val topicOpt = parser.accepts("topic", "The topic to be create, alter, delete, or describe. Can also accept a regular " +
val topicOpt = parser.accepts("topic", "The topic to be create, alter or describe. Can also accept a regular " +
"expression except for --create option")
.withRequiredArg
.describedAs("topic")
@@ -255,7 +244,7 @@ object TopicCommand {
val options = parser.parse(args : _*)
val allTopicLevelOpts: Set[OptionSpec[_]] = Set(alterOpt, createOpt, deleteOpt, describeOpt, listOpt)
val allTopicLevelOpts: Set[OptionSpec[_]] = Set(alterOpt, createOpt, describeOpt, listOpt)
def checkArgs() {
// check required args
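
With --delete removed by this patch, the remaining TopicCommand actions are --create, --alter, --list and --describe; a hedged usage sketch follows (the bin/kafka-topics.sh wrapper script is assumed and not part of this diff).
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test --partitions 1 --replication-factor 1
bin/kafka-topics.sh --zookeeper localhost:2181 --list
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic test --partitions 2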

core/src/main/scala/kafka/api/LeaderAndIsrRequest.scala (2)

@@ -32,6 +32,8 @@ import collection.Set
object LeaderAndIsr {
val initialLeaderEpoch: Int = 0
val initialZKVersion: Int = 0
val NoLeader = -1
val LeaderDuringDelete = -2
}
case class LeaderAndIsr(var leader: Int, var leaderEpoch: Int, var isr: List[Int], var zkVersion: Int) {

core/src/main/scala/kafka/api/TopicMetadata.scala (8)

@@ -32,9 +32,11 @@ object TopicMetadata {
val errorCode = readShortInRange(buffer, "error code", (-1, Short.MaxValue))
val topic = readShortString(buffer)
val numPartitions = readIntInRange(buffer, "number of partitions", (0, Int.MaxValue))
val partitionsMetadata = new ArrayBuffer[PartitionMetadata]()
for(i <- 0 until numPartitions)
partitionsMetadata += PartitionMetadata.readFrom(buffer, brokers)
val partitionsMetadata: Array[PartitionMetadata] = new Array[PartitionMetadata](numPartitions)
for(i <- 0 until numPartitions) {
val partitionMetadata = PartitionMetadata.readFrom(buffer, brokers)
partitionsMetadata(partitionMetadata.partitionId) = partitionMetadata
}
new TopicMetadata(topic, partitionsMetadata, errorCode)
}
}

core/src/main/scala/kafka/cluster/Partition.scala (4)

@@ -56,7 +56,7 @@ class Partition(val topic: String,
* each partition. */
private var controllerEpoch: Int = KafkaController.InitialControllerEpoch - 1
this.logIdent = "Partition [%s,%d] on broker %d: ".format(topic, partitionId, localBrokerId)
private val stateChangeLogger = Logger.getLogger(KafkaController.stateChangeLogger)
private val stateChangeLogger = KafkaController.stateChangeLogger
private def isReplicaLocal(replicaId: Int) : Boolean = (replicaId == localBrokerId)
@@ -88,7 +88,7 @@ class Partition(val topic: String,
if (isReplicaLocal(replicaId)) {
val config = LogConfig.fromProps(logManager.defaultConfig.toProps, AdminUtils.fetchTopicConfig(zkClient, topic))
val log = logManager.createLog(TopicAndPartition(topic, partitionId), config)
val checkpoint = replicaManager.highWatermarkCheckpoints(log.dir.getParent)
val checkpoint = replicaManager.highWatermarkCheckpoints(log.dir.getParentFile.getAbsolutePath)
val offsetMap = checkpoint.read
if (!offsetMap.contains(TopicAndPartition(topic, partitionId)))
warn("No checkpointed highwatermark is found for partition [%s,%d]".format(topic, partitionId))

core/src/main/scala/kafka/controller/ControllerChannelManager.scala (38)

@@ -114,7 +114,7 @@ class RequestSendThread(val controllerId: Int,
val channel: BlockingChannel)
extends ShutdownableThread("Controller-%d-to-broker-%d-send-thread".format(controllerId, toBroker.id)) {
private val lock = new Object()
private val stateChangeLogger = Logger.getLogger(KafkaController.stateChangeLogger)
private val stateChangeLogger = KafkaController.stateChangeLogger
connectToBroker(toBroker, channel)
override def doWork(): Unit = {
@@ -188,7 +188,7 @@ class ControllerBrokerRequestBatch(controller: KafkaController) extends Logging
val leaderAndIsrRequestMap = new mutable.HashMap[Int, mutable.HashMap[(String, Int), PartitionStateInfo]]
val stopReplicaRequestMap = new mutable.HashMap[Int, Seq[StopReplicaRequestInfo]]
val updateMetadataRequestMap = new mutable.HashMap[Int, mutable.HashMap[TopicAndPartition, PartitionStateInfo]]
private val stateChangeLogger = Logger.getLogger(KafkaController.stateChangeLogger)
private val stateChangeLogger = KafkaController.stateChangeLogger
def newBatch() {
// raise error if the previous batch is not empty
@@ -211,7 +211,8 @@ class ControllerBrokerRequestBatch(controller: KafkaController) extends Logging
leaderAndIsrRequestMap(brokerId).put((topic, partition),
PartitionStateInfo(leaderIsrAndControllerEpoch, replicas.toSet))
}
addUpdateMetadataRequestForBrokers(controllerContext.liveOrShuttingDownBrokerIds.toSeq)
addUpdateMetadataRequestForBrokers(controllerContext.liveOrShuttingDownBrokerIds.toSeq,
Set(TopicAndPartition(topic, partition)))
}
def addStopReplicaRequestForBrokers(brokerIds: Seq[Int], topic: String, partition: Int, deletePartition: Boolean,
@@ -232,23 +233,40 @@ class ControllerBrokerRequestBatch(controller: KafkaController) extends Logging
*
*/
def addUpdateMetadataRequestForBrokers(brokerIds: Seq[Int],
partitions: collection.Set[TopicAndPartition] = Set.empty[TopicAndPartition],
callback: (RequestOrResponse) => Unit = null) {
val partitionList = controllerContext.partitionLeadershipInfo.keySet.dropWhile(
p => controller.deleteTopicManager.isTopicQueuedUpForDeletion(p.topic))
partitionList.foreach { partition =>
def updateMetadataRequestMapFor(partition: TopicAndPartition, beingDeleted: Boolean) {
val leaderIsrAndControllerEpochOpt = controllerContext.partitionLeadershipInfo.get(partition)
leaderIsrAndControllerEpochOpt match {
case Some(leaderIsrAndControllerEpoch) =>
val replicas = controllerContext.partitionReplicaAssignment(partition).toSet
val partitionStateInfo = PartitionStateInfo(leaderIsrAndControllerEpoch, replicas)
val partitionStateInfo = if (beingDeleted) {
val leaderAndIsr = new LeaderAndIsr(LeaderAndIsr.LeaderDuringDelete, leaderIsrAndControllerEpoch.leaderAndIsr.isr)
PartitionStateInfo(LeaderIsrAndControllerEpoch(leaderAndIsr, leaderIsrAndControllerEpoch.controllerEpoch), replicas)
} else {
PartitionStateInfo(leaderIsrAndControllerEpoch, replicas)
}
brokerIds.filter(b => b >= 0).foreach { brokerId =>
updateMetadataRequestMap.getOrElseUpdate(brokerId, new mutable.HashMap[TopicAndPartition, PartitionStateInfo])
updateMetadataRequestMap(brokerId).put(partition, partitionStateInfo)
}
case None =>
info("Leader not assigned yet for partition %s. Skip sending udpate metadata request".format(partition))
info("Leader not yet assigned for partition %s. Skip sending UpdateMetadataRequest.".format(partition))
}
}
val filteredPartitions = {
val givenPartitions = if (partitions.isEmpty)
controllerContext.partitionLeadershipInfo.keySet
else
partitions
if (controller.deleteTopicManager.partitionsToBeDeleted.isEmpty)
givenPartitions
else
givenPartitions -- controller.deleteTopicManager.partitionsToBeDeleted
}
filteredPartitions.foreach(partition => updateMetadataRequestMapFor(partition, beingDeleted = false))
controller.deleteTopicManager.partitionsToBeDeleted.foreach(partition => updateMetadataRequestMapFor(partition, beingDeleted = true))
}
def sendRequestsToBrokers(controllerEpoch: Int, correlationId: Int) {
@@ -272,10 +290,10 @@ class ControllerBrokerRequestBatch(controller: KafkaController) extends Logging
val broker = m._1
val partitionStateInfos = m._2.toMap
val updateMetadataRequest = new UpdateMetadataRequest(controllerId, controllerEpoch, correlationId, clientId,
partitionStateInfos, controllerContext.liveOrShuttingDownBrokers)
partitionStateInfos, controllerContext.liveOrShuttingDownBrokers)
partitionStateInfos.foreach(p => stateChangeLogger.trace(("Controller %d epoch %d sending UpdateMetadata request %s with " +
"correlationId %d to broker %d for partition %s").format(controllerId, controllerEpoch, p._2.leaderIsrAndControllerEpoch,
correlationId, broker, p._1)))
correlationId, broker, p._1)))
controller.sendRequest(broker, updateMetadataRequest, null)
}
updateMetadataRequestMap.clear()
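Taken together, the addUpdateMetadataRequestForBrokers changes above select two groups of partitions per request: the requested (or all known) partitions minus those queued for deletion, sent with their current leader, and every partition of a topic being deleted, sent with LeaderDuringDelete (-2) as the leader so brokers stop serving it. A rough sketch of that selection with plain collections; PartitionState and the parameter names are hypothetical simplifications, and only the -2 sentinel comes from the diff:
object UpdateMetadataSelectionSketch {
  val LeaderDuringDelete: Int = -2

  case class PartitionState(leader: Int, isr: List[Int])

  def select(requested: Set[String],                 // partitions named for this batch; empty means "all known"
             known: Map[String, PartitionState],     // partitions that currently have leadership info
             beingDeleted: Set[String]): Map[String, PartitionState] = {
    val requestedOrAll = if (requested.isEmpty) known.keySet else requested
    val normal = (requestedOrAll -- beingDeleted).toSeq
      .flatMap(p => known.get(p).map(ps => p -> ps)).toMap          // partitions without a leader are skipped, as in the diff
    val deleting = beingDeleted.toSeq
      .flatMap(p => known.get(p).map(ps => p -> ps.copy(leader = LeaderDuringDelete))).toMap
    normal ++ deleting
  }
}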

102
core/src/main/scala/kafka/controller/KafkaController.scala

@@ -34,10 +34,9 @@ import org.apache.zookeeper.Watcher.Event.KeeperState
import org.I0Itec.zkclient.{IZkDataListener, IZkStateListener, ZkClient}
import org.I0Itec.zkclient.exception.{ZkNodeExistsException, ZkNoNodeException}
import java.util.concurrent.atomic.AtomicInteger
import org.apache.log4j.Logger
import java.util.concurrent.locks.ReentrantLock
import scala.Some
import kafka.common.TopicAndPartition
import java.util.concurrent.locks.ReentrantLock
class ControllerContext(val zkClient: ZkClient,
val zkSessionTimeout: Int) {
@@ -125,10 +124,12 @@ trait KafkaControllerMBean {
object KafkaController extends Logging {
val MBeanName = "kafka.controller:type=KafkaController,name=ControllerOps"
val stateChangeLogger = "state.change.logger"
val stateChangeLogger = new StateChangeLogger("state.change.logger")
val InitialControllerEpoch = 1
val InitialControllerEpochZkVersion = 1
case class StateChangeLogger(override val loggerName: String) extends Logging
def parseControllerId(controllerInfoString: String): Int = {
try {
Json.parseFull(controllerInfoString) match {
@@ -154,7 +155,7 @@ object KafkaController extends Logging {
class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logging with KafkaMetricsGroup with KafkaControllerMBean {
this.logIdent = "[Controller " + config.brokerId + "]: "
private var isRunning = true
private val stateChangeLogger = Logger.getLogger(KafkaController.stateChangeLogger)
private val stateChangeLogger = KafkaController.stateChangeLogger
val controllerContext = new ControllerContext(zkClient, config.zkSessionTimeoutMs)
val partitionStateMachine = new PartitionStateMachine(this)
val replicaStateMachine = new ReplicaStateMachine(this)
@@ -335,14 +336,21 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
*/
def onControllerResignation() {
inLock(controllerContext.controllerLock) {
autoRebalanceScheduler.shutdown()
deleteTopicManager.shutdown()
Utils.unregisterMBean(KafkaController.MBeanName)
if(deleteTopicManager != null)
deleteTopicManager.shutdown()
partitionStateMachine.shutdown()
replicaStateMachine.shutdown()
if(config.autoLeaderRebalanceEnable)
autoRebalanceScheduler.shutdown()
if(controllerContext.controllerChannelManager != null) {
controllerContext.controllerChannelManager.shutdown()
controllerContext.controllerChannelManager = null
info("Controller shutdown complete")
}
}
}
@@ -433,7 +441,7 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
if(replicasForTopicsToBeDeleted.size > 0) {
// it is required to mark the respective replicas in TopicDeletionFailed state since the replica cannot be
// deleted when the broker is down. This will prevent the replica from being in TopicDeletionStarted state indefinitely
// since topic deletion cannot be retried if at least one replica is in TopicDeletionStarted state
// since topic deletion cannot be retried until at least one replica is in TopicDeletionStarted state
deleteTopicManager.failReplicaDeletion(replicasForTopicsToBeDeleted)
}
}
@@ -443,6 +451,7 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
* and partitions as input. It does the following -
* 1. Registers partition change listener. This is not required until KAFKA-347
* 2. Invokes the new partition callback
* 3. Send metadata request with the new topic to all brokers so they allow requests for that topic to be served
*/
def onNewTopicCreation(topics: Set[String], newPartitions: Set[TopicAndPartition]) {
info("New topic creation callback for %s".format(newPartitions.mkString(",")))
@@ -545,7 +554,7 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
info("Removed partition %s from the list of reassigned partitions in zookeeper".format(topicAndPartition))
controllerContext.partitionsBeingReassigned.remove(topicAndPartition)
//12. After electing leader, the replicas and isr information changes, so resend the update metadata request to every broker
sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq)
sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq, Set(topicAndPartition))
// signal delete topic thread if reassignment for some partitions belonging to topics being deleted just completed
deleteTopicManager.resumeDeletionForTopics(Set(topicAndPartition.topic))
}
@@ -581,8 +590,8 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
// first register ISR change listener
watchIsrChangesForReassignedPartition(topic, partition, reassignedPartitionContext)
controllerContext.partitionsBeingReassigned.put(topicAndPartition, reassignedPartitionContext)
// halt topic deletion for the partitions being reassigned
deleteTopicManager.haltTopicDeletion(Set(topic))
// mark topic ineligible for deletion for the partitions being reassigned
deleteTopicManager.markTopicIneligibleForDeletion(Set(topic))
onPartitionReassignment(topicAndPartition, reassignedPartitionContext)
} else {
// some replica in RAR is not alive. Fail partition reassignment
@@ -601,16 +610,16 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
}
}
def onPreferredReplicaElection(partitions: Set[TopicAndPartition]) {
def onPreferredReplicaElection(partitions: Set[TopicAndPartition], isTriggeredByAutoRebalance: Boolean = false) {
info("Starting preferred replica leader election for partitions %s".format(partitions.mkString(",")))
try {
controllerContext.partitionsUndergoingPreferredReplicaElection ++= partitions
deleteTopicManager.haltTopicDeletion(partitions.map(_.topic))
deleteTopicManager.markTopicIneligibleForDeletion(partitions.map(_.topic))
partitionStateMachine.handleStateChanges(partitions, OnlinePartition, preferredReplicaPartitionLeaderSelector)
} catch {
case e: Throwable => error("Error completing preferred replica leader election for partitions %s".format(partitions.mkString(",")), e)
} finally {
removePartitionsFromPreferredReplicaElection(partitions)
removePartitionsFromPreferredReplicaElection(partitions, isTriggeredByAutoRebalance)
deleteTopicManager.resumeDeletionForTopics(partitions.map(_.topic))
}
}
@@ -638,15 +647,7 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
def shutdown() = {
inLock(controllerContext.controllerLock) {
isRunning = false
partitionStateMachine.shutdown()
replicaStateMachine.shutdown()
if (config.autoLeaderRebalanceEnable)
autoRebalanceScheduler.shutdown()
if(controllerContext.controllerChannelManager != null) {
controllerContext.controllerChannelManager.shutdown()
controllerContext.controllerChannelManager = null
info("Controller shutdown complete")
}
onControllerResignation()
}
}
@@ -748,17 +749,16 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
private def initializeTopicDeletion() {
val topicsQueuedForDeletion = ZkUtils.getChildrenParentMayNotExist(zkClient, ZkUtils.DeleteTopicsPath).toSet
val replicasOnDeadBrokers = controllerContext.partitionReplicaAssignment.filter(r =>
r._2.foldLeft(false)((res,r) => res || !controllerContext.liveBrokerIds.contains(r)))
val topicsWithReplicasOnDeadBrokers = replicasOnDeadBrokers.map(_._1.topic).toSet
val topicsWithReplicasOnDeadBrokers = controllerContext.partitionReplicaAssignment.filter { case(partition, replicas) =>
replicas.exists(r => !controllerContext.liveBrokerIds.contains(r)) }.keySet.map(_.topic)
val topicsForWhichPartitionReassignmentIsInProgress = controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
val topicsForWhichPreferredReplicaElectionIsInProgress = controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
val haltedTopicsForDeletion = topicsWithReplicasOnDeadBrokers | topicsForWhichPartitionReassignmentIsInProgress |
val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | topicsForWhichPartitionReassignmentIsInProgress |
topicsForWhichPreferredReplicaElectionIsInProgress
info("List of topics to be deleted: %s".format(topicsQueuedForDeletion.mkString(",")))
info("List of topics halted for deletion: %s".format(haltedTopicsForDeletion.mkString(",")))
info("List of topics ineligible for deletion: %s".format(topicsIneligibleForDeletion.mkString(",")))
// initialize the topic deletion manager
deleteTopicManager = new TopicDeletionManager(this, topicsQueuedForDeletion, haltedTopicsForDeletion)
deleteTopicManager = new TopicDeletionManager(this, topicsQueuedForDeletion, topicsIneligibleForDeletion)
}
private def maybeTriggerPartitionReassignment() {
@@ -913,7 +913,8 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
}
}
def removePartitionsFromPreferredReplicaElection(partitionsToBeRemoved: Set[TopicAndPartition]) {
def removePartitionsFromPreferredReplicaElection(partitionsToBeRemoved: Set[TopicAndPartition],
isTriggeredByAutoRebalance : Boolean) {
for(partition <- partitionsToBeRemoved) {
// check the status
val currentLeader = controllerContext.partitionLeadershipInfo(partition).leaderAndIsr.leader
@@ -924,7 +925,8 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
warn("Partition %s failed to complete preferred replica leader election. Leader is %d".format(partition, currentLeader))
}
}
ZkUtils.deletePath(zkClient, ZkUtils.PreferredReplicaLeaderElectionPath)
if (!isTriggeredByAutoRebalance)
ZkUtils.deletePath(zkClient, ZkUtils.PreferredReplicaLeaderElectionPath)
controllerContext.partitionsUndergoingPreferredReplicaElection --= partitionsToBeRemoved
}
@@ -933,9 +935,9 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
* metadata requests
* @param brokers The brokers that the update metadata request should be sent to
*/
def sendUpdateMetadataRequest(brokers: Seq[Int]) {
def sendUpdateMetadataRequest(brokers: Seq[Int], partitions: Set[TopicAndPartition] = Set.empty[TopicAndPartition]) {
brokerRequestBatch.newBatch()
brokerRequestBatch.addUpdateMetadataRequestForBrokers(brokers)
brokerRequestBatch.addUpdateMetadataRequestForBrokers(brokers, partitions)
brokerRequestBatch.sendRequestsToBrokers(epoch, controllerContext.correlationId.getAndIncrement)
}
@@ -967,7 +969,7 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
"controller was elected with epoch %d. Aborting state change by this controller".format(controllerEpoch))
if (leaderAndIsr.isr.contains(replicaId)) {
// if the replica to be removed from the ISR is also the leader, set the new leader value to -1
val newLeader = if(replicaId == leaderAndIsr.leader) -1 else leaderAndIsr.leader
val newLeader = if (replicaId == leaderAndIsr.leader) LeaderAndIsr.NoLeader else leaderAndIsr.leader
val newLeaderAndIsr = new LeaderAndIsr(newLeader, leaderAndIsr.leaderEpoch + 1,
leaderAndIsr.isr.filter(b => b != replicaId), leaderAndIsr.zkVersion + 1)
// update the new leadership decision in zookeeper or retry
@@ -1089,6 +1091,7 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
topicsNotInPreferredReplica =
topicAndPartitionsForBroker.filter {
case(topicPartition, replicas) => {
controllerContext.partitionLeadershipInfo.contains(topicPartition) &&
controllerContext.partitionLeadershipInfo(topicPartition).leaderAndIsr.leader != leaderBroker
}
}
@@ -1101,26 +1104,19 @@ class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logg
// check ratio and if greater than desired ratio, trigger a rebalance for the topic partitions
// that need to be on this broker
if (imbalanceRatio > (config.leaderImbalancePerBrokerPercentage.toDouble / 100)) {
inLock(controllerContext.controllerLock) {
// do this check only if the broker is live and there are no partitions being reassigned currently
// and preferred replica election is not in progress
if (controllerContext.liveBrokerIds.contains(leaderBroker) &&
controllerContext.partitionsBeingReassigned.size == 0 &&
controllerContext.partitionsUndergoingPreferredReplicaElection.size == 0) {
val zkPath = ZkUtils.PreferredReplicaLeaderElectionPath
val partitionsList = topicsNotInPreferredReplica.keys.map(e => Map("topic" -> e.topic, "partition" -> e.partition))
val jsonData = Json.encode(Map("version" -> 1, "partitions" -> partitionsList))
try {
ZkUtils.createPersistentPath(zkClient, zkPath, jsonData)
info("Created preferred replica election path with %s".format(jsonData))
} catch {
case e2: ZkNodeExistsException =>
val partitionsUndergoingPreferredReplicaElection =
PreferredReplicaLeaderElectionCommand.parsePreferredReplicaElectionData(ZkUtils.readData(zkClient, zkPath)._1)
error("Preferred replica leader election currently in progress for " +
"%s. Aborting operation".format(partitionsUndergoingPreferredReplicaElection));
case e3: Throwable =>
error("Error while trying to auto rebalance topics %s".format(topicsNotInPreferredReplica.keys))
topicsNotInPreferredReplica.foreach {
case(topicPartition, replicas) => {
inLock(controllerContext.controllerLock) {
// do this check only if the broker is live and there are no partitions being reassigned currently
// and preferred replica election is not in progress
if (controllerContext.liveBrokerIds.contains(leaderBroker) &&
controllerContext.partitionsBeingReassigned.size == 0 &&
controllerContext.partitionsUndergoingPreferredReplicaElection.size == 0 &&
!deleteTopicManager.isTopicQueuedUpForDeletion(topicPartition.topic) &&
!deleteTopicManager.isTopicDeletionInProgress(topicPartition.topic) &&
controllerContext.allTopics.contains(topicPartition.topic)) {
onPreferredReplicaElection(Set(topicPartition), true)
}
}
}
}
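The reworked auto-rebalance block above now triggers preferred replica election one partition at a time, and only when the preferred leader broker is live, no reassignment or preferred replica election is in flight, the topic is not queued for (or undergoing) deletion, and the topic still exists. A condensed sketch of that decision, assuming the per-broker totals and the eligibility predicate are supplied by the caller; the names below are hypothetical, not the controller's API:
def partitionsToAutoRebalance(ledByOtherBroker: Set[String],   // partitions whose current leader != preferred leader
                              totalForBroker: Int,             // all partitions whose preferred leader is this broker
                              imbalancePercentage: Int,        // leaderImbalancePerBrokerPercentage from the config
                              isEligible: String => Boolean): Set[String] = {
  require(totalForBroker > 0, "broker must be the preferred leader of at least one partition")
  val imbalanceRatio = ledByOtherBroker.size.toDouble / totalForBroker
  if (imbalanceRatio > imbalancePercentage.toDouble / 100)
    ledByOtherBroker.filter(isEligible)   // each surviving partition gets a preferred replica election
  else
    Set.empty
}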

46
core/src/main/scala/kafka/controller/PartitionStateMachine.scala

@@ -50,7 +50,10 @@ class PartitionStateMachine(controller: KafkaController) extends Logging {
private val hasStarted = new AtomicBoolean(false)
private val noOpPartitionLeaderSelector = new NoOpLeaderSelector(controllerContext)
this.logIdent = "[Partition state machine on Controller " + controllerId + "]: "
private val stateChangeLogger = Logger.getLogger(KafkaController.stateChangeLogger)
private val stateChangeLogger = KafkaController.stateChangeLogger
private var topicChangeListener: TopicChangeListener = null
private var deleteTopicsListener: DeleteTopicsListener = null
private var addPartitionsListener: mutable.Map[String, AddPartitionsListener] = mutable.Map.empty
/**
* Invoked on successful controller election. First registers a topic change listener since that triggers all
@@ -69,7 +72,8 @@ class PartitionStateMachine(controller: KafkaController) extends Logging {
// register topic and partition change listeners
def registerListeners() {
registerTopicChangeListener()
registerDeleteTopicListener()
if(controller.config.deleteTopicEnable)
registerDeleteTopicListener()
}
/**
@@ -167,8 +171,9 @@ class PartitionStateMachine(controller: KafkaController) extends Logging {
assignReplicasToPartitions(topic, partition)
partitionState.put(topicAndPartition, NewPartition)
val assignedReplicas = controllerContext.partitionReplicaAssignment(topicAndPartition).mkString(",")
stateChangeLogger.trace("Controller %d epoch %d changed partition %s state from NotExists to New with assigned replicas %s"
.format(controllerId, controller.epoch, topicAndPartition, assignedReplicas))
stateChangeLogger.trace("Controller %d epoch %d changed partition %s state from %s to %s with assigned replicas %s"
.format(controllerId, controller.epoch, topicAndPartition, currState, targetState,
assignedReplicas))
// post: partition has been assigned replicas
case OnlinePartition =>
assertValidPreviousStates(topicAndPartition, List(NewPartition, OnlinePartition, OfflinePartition), OnlinePartition)
@@ -184,22 +189,22 @@ class PartitionStateMachine(controller: KafkaController) extends Logging {
}
partitionState.put(topicAndPartition, OnlinePartition)
val leader = controllerContext.partitionLeadershipInfo(topicAndPartition).leaderAndIsr.leader
stateChangeLogger.trace("Controller %d epoch %d changed partition %s from %s to OnlinePartition with leader %d"
.format(controllerId, controller.epoch, topicAndPartition, partitionState(topicAndPartition), leader))
stateChangeLogger.trace("Controller %d epoch %d changed partition %s from %s to %s with leader %d"
.format(controllerId, controller.epoch, topicAndPartition, currState, targetState, leader))
// post: partition has a leader
case OfflinePartition =>
// pre: partition should be in New or Online state
assertValidPreviousStates(topicAndPartition, List(NewPartition, OnlinePartition, OfflinePartition), OfflinePartition)
// should be called when the leader for a partition is no longer alive
stateChangeLogger.trace("Controller %d epoch %d changed partition %s state from Online to Offline"
.format(controllerId, controller.epoch, topicAndPartition))
stateChangeLogger.trace("Controller %d epoch %d changed partition %s state from %s to %s"
.format(controllerId, controller.epoch, topicAndPartition, currState, targetState))
partitionState.put(topicAndPartition, OfflinePartition)
// post: partition has no alive leader
case NonExistentPartition =>
// pre: partition should be in Offline state
assertValidPreviousStates(topicAndPartition, List(OfflinePartition), NonExistentPartition)
stateChangeLogger.trace("Controller %d epoch %d changed partition %s state from Offline to NotExists"
.format(controllerId, controller.epoch, topicAndPartition))
stateChangeLogger.trace("Controller %d epoch %d changed partition %s state from %s to %s"
.format(controllerId, controller.epoch, topicAndPartition, currState, targetState))
partitionState.put(topicAndPartition, NonExistentPartition)
// post: partition state is deleted from all brokers and zookeeper
}
@@ -358,15 +363,22 @@ class PartitionStateMachine(controller: KafkaController) extends Logging {
}
private def registerTopicChangeListener() = {
zkClient.subscribeChildChanges(ZkUtils.BrokerTopicsPath, new TopicChangeListener())
topicChangeListener = new TopicChangeListener()
zkClient.subscribeChildChanges(ZkUtils.BrokerTopicsPath, topicChangeListener)
}
def registerPartitionChangeListener(topic: String) = {
zkClient.subscribeDataChanges(ZkUtils.getTopicPath(topic), new AddPartitionsListener(topic))
addPartitionsListener.put(topic, new AddPartitionsListener(topic))
zkClient.subscribeDataChanges(ZkUtils.getTopicPath(topic), addPartitionsListener(topic))
}
def deregisterPartitionChangeListener(topic: String) = {
zkClient.unsubscribeDataChanges(ZkUtils.getTopicPath(topic), addPartitionsListener(topic))
}
private def registerDeleteTopicListener() = {
zkClient.subscribeChildChanges(ZkUtils.DeleteTopicsPath, new DeleteTopicsListener())
deleteTopicsListener = new DeleteTopicsListener()
zkClient.subscribeChildChanges(ZkUtils.DeleteTopicsPath, deleteTopicsListener)
}
private def getLeaderIsrAndEpochOrThrowException(topic: String, partition: Int): LeaderIsrAndControllerEpoch = {
@@ -438,21 +450,23 @@ class PartitionStateMachine(controller: KafkaController) extends Logging {
}
debug("Delete topics listener fired for topics %s to be deleted".format(topicsToBeDeleted.mkString(",")))
val nonExistentTopics = topicsToBeDeleted.filter(t => !controllerContext.allTopics.contains(t))
if(nonExistentTopics.size > 0)
if(nonExistentTopics.size > 0) {
warn("Ignoring request to delete non-existing topics " + nonExistentTopics.mkString(","))
nonExistentTopics.foreach(topic => ZkUtils.deletePathRecursive(zkClient, ZkUtils.getDeleteTopicPath(topic)))
}
topicsToBeDeleted --= nonExistentTopics
if(topicsToBeDeleted.size > 0) {
info("Starting topic deletion for topics " + topicsToBeDeleted.mkString(","))
// add topic to deletion list
controller.deleteTopicManager.enqueueTopicsForDeletion(topicsToBeDeleted)
// halt if other state changes are in progress
// mark topic ineligible for deletion if other state changes are in progress
topicsToBeDeleted.foreach { topic =>
val preferredReplicaElectionInProgress =
controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic).contains(topic)
val partitionReassignmentInProgress =
controllerContext.partitionsBeingReassigned.keySet.map(_.topic).contains(topic)
if(preferredReplicaElectionInProgress || partitionReassignmentInProgress)
controller.deleteTopicManager.haltTopicDeletion(Set(topic))
controller.deleteTopicManager.markTopicIneligibleForDeletion(Set(topic))
}
}
}
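The DeleteTopicsListener changes above add bookkeeping for topics that no longer exist (their delete markers in ZooKeeper are simply removed) and mark freshly enqueued topics ineligible for deletion while a reassignment or preferred replica election still touches them. A compact sketch of that flow; the function parameters are hypothetical callbacks standing in for the ZkUtils and deleteTopicManager calls in the diff:
def handleDeleteTopicsChange(requested: Set[String],
                             allTopics: Set[String],
                             stateChangeInProgress: String => Boolean,  // reassignment or preferred election touches the topic
                             removeDeleteMarker: String => Unit,        // e.g. delete the topic's node under the delete-topics path
                             enqueueForDeletion: Set[String] => Unit,
                             markIneligible: Set[String] => Unit): Unit = {
  val nonExistent = requested.filterNot(allTopics.contains)
  nonExistent.foreach(removeDeleteMarker)            // unknown topics: just clear their delete markers
  val toDelete = requested -- nonExistent
  if (toDelete.nonEmpty) {
    enqueueForDeletion(toDelete)
    toDelete.filter(stateChangeInProgress).foreach(t => markIneligible(Set(t)))
  }
}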

64
core/src/main/scala/kafka/controller/ReplicaStateMachine.scala

@@ -40,7 +40,7 @@ import kafka.utils.Utils._
* 4. ReplicaDeletionStarted: If replica deletion starts, it is moved to this state. Valid previous state is OfflineReplica
* 5. ReplicaDeletionSuccessful: If replica responds with no error code in response to a delete replica request, it is
* moved to this state. Valid previous state is ReplicaDeletionStarted
* 6. ReplicaDeletionFailed: If replica deletion fails, it is moved to this state. Valid previous state is ReplicaDeletionStarted
* 6. ReplicaDeletionIneligible: If replica deletion fails, it is moved to this state. Valid previous state is ReplicaDeletionStarted
* 7. NonExistentReplica: If a replica is deleted successfully, it is moved to this state. Valid previous state is
* ReplicaDeletionSuccessful
*/
@@ -52,7 +52,7 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
val brokerRequestBatch = new ControllerBrokerRequestBatch(controller)
private val hasStarted = new AtomicBoolean(false)
this.logIdent = "[Replica state machine on controller " + controller.config.brokerId + "]: "
private val stateChangeLogger = Logger.getLogger(KafkaController.stateChangeLogger)
private val stateChangeLogger = KafkaController.stateChangeLogger
/**
* Invoked on successful controller election. First registers a broker change listener since that triggers all
@@ -115,7 +115,7 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
* --send LeaderAndIsr request with current leader and isr to the new replica and UpdateMetadata request for the
* partition to every live broker
*
* NewReplica,OnlineReplica,OfflineReplica,ReplicaDeletionFailed -> OfflineReplica
* NewReplica,OnlineReplica,OfflineReplica,ReplicaDeletionIneligible -> OfflineReplica
* --send StopReplicaRequest to the replica (w/o deletion)
* --remove this replica from the isr and send LeaderAndIsr request (with new isr) to the leader replica and
* UpdateMetadata request for the partition to every live broker.
@@ -126,7 +126,7 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
* ReplicaDeletionStarted -> ReplicaDeletionSuccessful
* -- mark the state of the replica in the state machine
*
* ReplicaDeletionStarted -> ReplicaDeletionFailed
* ReplicaDeletionStarted -> ReplicaDeletionIneligible
* -- mark the state of the replica in the state machine
*
* ReplicaDeletionSuccessful -> NonExistentReplica
@@ -146,8 +146,8 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
throw new StateChangeFailedException(("Controller %d epoch %d initiated state change of replica %d for partition %s " +
"to %s failed because replica state machine has not started")
.format(controllerId, controller.epoch, replicaId, topicAndPartition, targetState))
val currState = replicaState.getOrElseUpdate(partitionAndReplica, NonExistentReplica)
try {
replicaState.getOrElseUpdate(partitionAndReplica, NonExistentReplica)
val replicaAssignment = controllerContext.partitionReplicaAssignment(topicAndPartition)
targetState match {
case NewReplica =>
@@ -165,45 +165,47 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
case None => // new leader request will be sent to this replica when one gets elected
}
replicaState.put(partitionAndReplica, NewReplica)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s to NewReplica"
.format(controllerId, controller.epoch, replicaId, topicAndPartition))
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
.format(controllerId, controller.epoch, replicaId, topicAndPartition, currState,
targetState))
case ReplicaDeletionStarted =>
assertValidPreviousStates(partitionAndReplica, List(OfflineReplica), targetState)
replicaState.put(partitionAndReplica, ReplicaDeletionStarted)
// send stop replica command
brokerRequestBatch.addStopReplicaRequestForBrokers(List(replicaId), topic, partition, deletePartition = true,
callbacks.stopReplicaResponseCallback)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s to ReplicaDeletionStarted"
.format(controllerId, controller.epoch, replicaId, topicAndPartition))
case ReplicaDeletionFailed =>
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
.format(controllerId, controller.epoch, replicaId, topicAndPartition, currState, targetState))
case ReplicaDeletionIneligible =>
assertValidPreviousStates(partitionAndReplica, List(ReplicaDeletionStarted), targetState)
replicaState.put(partitionAndReplica, ReplicaDeletionFailed)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s to ReplicaDeletionFailed"
.format(controllerId, controller.epoch, replicaId, topicAndPartition))
replicaState.put(partitionAndReplica, ReplicaDeletionIneligible)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
.format(controllerId, controller.epoch, replicaId, topicAndPartition, currState, targetState))
case ReplicaDeletionSuccessful =>
assertValidPreviousStates(partitionAndReplica, List(ReplicaDeletionStarted), targetState)
replicaState.put(partitionAndReplica, ReplicaDeletionSuccessful)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s to ReplicaDeletionSuccessful"
.format(controllerId, controller.epoch, replicaId, topicAndPartition))
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
.format(controllerId, controller.epoch, replicaId, topicAndPartition, currState, targetState))
case NonExistentReplica =>
assertValidPreviousStates(partitionAndReplica, List(ReplicaDeletionSuccessful), targetState)
// remove this replica from the assigned replicas list for its partition
val currentAssignedReplicas = controllerContext.partitionReplicaAssignment(topicAndPartition)
controllerContext.partitionReplicaAssignment.put(topicAndPartition, currentAssignedReplicas.filterNot(_ == replicaId))
replicaState.remove(partitionAndReplica)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s to NonExistentReplica"
.format(controllerId, controller.epoch, replicaId, topicAndPartition))
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
.format(controllerId, controller.epoch, replicaId, topicAndPartition, currState, targetState))
case OnlineReplica =>
assertValidPreviousStates(partitionAndReplica,
List(NewReplica, OnlineReplica, OfflineReplica, ReplicaDeletionFailed), targetState)
List(NewReplica, OnlineReplica, OfflineReplica, ReplicaDeletionIneligible), targetState)
replicaState(partitionAndReplica) match {
case NewReplica =>
// add this replica to the assigned replicas list for its partition
val currentAssignedReplicas = controllerContext.partitionReplicaAssignment(topicAndPartition)
if(!currentAssignedReplicas.contains(replicaId))
controllerContext.partitionReplicaAssignment.put(topicAndPartition, currentAssignedReplicas :+ replicaId)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s to OnlineReplica"
.format(controllerId, controller.epoch, replicaId, topicAndPartition))
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
.format(controllerId, controller.epoch, replicaId, topicAndPartition, currState,
targetState))
case _ =>
// check if the leader for this partition ever existed
controllerContext.partitionLeadershipInfo.get(topicAndPartition) match {
@@ -211,8 +213,8 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
brokerRequestBatch.addLeaderAndIsrRequestForBrokers(List(replicaId), topic, partition, leaderIsrAndControllerEpoch,
replicaAssignment)
replicaState.put(partitionAndReplica, OnlineReplica)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s to OnlineReplica"
.format(controllerId, controller.epoch, replicaId, topicAndPartition))
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
.format(controllerId, controller.epoch, replicaId, topicAndPartition, currState, targetState))
case None => // that means the partition was never in OnlinePartition state, this means the broker never
// started a log for that partition and does not have a high watermark value for this partition
}
@@ -220,7 +222,7 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
replicaState.put(partitionAndReplica, OnlineReplica)
case OfflineReplica =>
assertValidPreviousStates(partitionAndReplica,
List(NewReplica, OnlineReplica, OfflineReplica, ReplicaDeletionFailed), targetState)
List(NewReplica, OnlineReplica, OfflineReplica, ReplicaDeletionIneligible), targetState)
// send stop replica command to the replica so that it stops fetching from the leader
brokerRequestBatch.addStopReplicaRequestForBrokers(List(replicaId), topic, partition, deletePartition = false)
// As an optimization, the controller removes dead replicas from the ISR
@@ -233,8 +235,8 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
brokerRequestBatch.addLeaderAndIsrRequestForBrokers(List(updatedLeaderIsrAndControllerEpoch.leaderAndIsr.leader),
topic, partition, updatedLeaderIsrAndControllerEpoch, replicaAssignment)
replicaState.put(partitionAndReplica, OfflineReplica)
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s to OfflineReplica"
.format(controllerId, controller.epoch, replicaId, topicAndPartition))
stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
.format(controllerId, controller.epoch, replicaId, topicAndPartition, currState, targetState))
false
case None =>
true
@@ -250,8 +252,8 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
}
catch {
case t: Throwable =>
stateChangeLogger.error("Controller %d epoch %d initiated state change of replica %d for partition [%s,%d] to %s failed"
.format(controllerId, controller.epoch, replicaId, topic, partition, targetState), t)
stateChangeLogger.error("Controller %d epoch %d initiated state change of replica %d for partition [%s,%d] from %s to %s failed"
.format(controllerId, controller.epoch, replicaId, topic, partition, currState, targetState), t)
}
}
@@ -273,7 +275,7 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
}
def replicasInDeletionStates(topic: String): Set[PartitionAndReplica] = {
val deletionStates = Set(ReplicaDeletionStarted, ReplicaDeletionSuccessful, ReplicaDeletionFailed)
val deletionStates = Set(ReplicaDeletionStarted, ReplicaDeletionSuccessful, ReplicaDeletionIneligible)
replicaState.filter(r => r._1.topic.equals(topic) && deletionStates.contains(r._2)).keySet
}
@@ -304,8 +306,8 @@ class ReplicaStateMachine(controller: KafkaController) extends Logging {
case false =>
// mark replicas on dead brokers as failed for topic deletion, if they belong to a topic to be deleted.
// This is required during controller failover since during controller failover a broker can go down,
// so the replicas on that broker should be moved to ReplicaDeletionFailed to be on the safer side.
replicaState.put(partitionAndReplica, ReplicaDeletionFailed)
// so the replicas on that broker should be moved to ReplicaDeletionIneligible to be on the safer side.
replicaState.put(partitionAndReplica, ReplicaDeletionIneligible)
}
}
}
@@ -356,7 +358,7 @@ case object OnlineReplica extends ReplicaState { val state: Byte = 2 }
case object OfflineReplica extends ReplicaState { val state: Byte = 3 }
case object ReplicaDeletionStarted extends ReplicaState { val state: Byte = 4}
case object ReplicaDeletionSuccessful extends ReplicaState { val state: Byte = 5}
case object ReplicaDeletionFailed extends ReplicaState { val state: Byte = 6}
case object ReplicaDeletionIneligible extends ReplicaState { val state: Byte = 6}
case object NonExistentReplica extends ReplicaState { val state: Byte = 7 }
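The rename from ReplicaDeletionFailed to ReplicaDeletionIneligible above does not change the transition rules spelled out in the class comment: deletion still starts from OfflineReplica, and only ReplicaDeletionStarted can move to Successful or Ineligible. Written out as a small lookup table (a hypothetical summary for reference, not code from the patch):
object ReplicaDeletionTransitionsSketch {
  sealed trait State
  case object OfflineReplica extends State
  case object ReplicaDeletionStarted extends State
  case object ReplicaDeletionSuccessful extends State
  case object ReplicaDeletionIneligible extends State
  case object NonExistentReplica extends State

  // target state -> states it may be entered from, per the comments in the diff above
  val validPrevious: Map[State, Set[State]] = Map(
    ReplicaDeletionStarted    -> Set(OfflineReplica),
    ReplicaDeletionSuccessful -> Set(ReplicaDeletionStarted),
    ReplicaDeletionIneligible -> Set(ReplicaDeletionStarted),
    NonExistentReplica        -> Set(ReplicaDeletionSuccessful)
  )

  def canMove(from: State, to: State): Boolean =
    validPrevious.get(to).exists(_.contains(from))
}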

169
core/src/main/scala/kafka/controller/TopicDeletionManager.scala

@@ -22,6 +22,8 @@ import kafka.utils.Utils._
import collection.Set
import kafka.common.{ErrorMapping, TopicAndPartition}
import kafka.api.{StopReplicaResponse, RequestOrResponse}
import java.util.concurrent.locks.ReentrantLock
import java.util.concurrent.atomic.AtomicBoolean
/**
* This manages the state machine for topic deletion.
@@ -30,8 +32,8 @@ import kafka.api.{StopReplicaResponse, RequestOrResponse}
* 3. The controller has a background thread that handles topic deletion. The purpose of having this background thread
* is to accommodate the TTL feature, when we have it. This thread is signaled whenever deletion for a topic needs to
* be started or resumed. Currently, a topic's deletion can be started only by the onPartitionDeletion callback on the
* controller. In the future, it can be triggered based on the configured TTL for the topic. A topic's deletion will
* be halted in the following scenarios -
* controller. In the future, it can be triggered based on the configured TTL for the topic. A topic will be ineligible
* for deletion in the following scenarios -
* 3.1 broker hosting one of the replicas for that topic goes down
* 3.2 partition reassignment for partitions of that topic is in progress
* 3.3 preferred replica election for partitions of that topic is in progress
@@ -62,37 +64,45 @@ import kafka.api.{StopReplicaResponse, RequestOrResponse}
* it marks the topic for deletion retry.
* @param controller
* @param initialTopicsToBeDeleted The topics that are queued up for deletion in zookeeper at the time of controller failover
* @param initialHaltedTopicsForDeletion The topics for which deletion is halted due to any of the conditions mentioned in #3 above
* @param initialTopicsIneligibleForDeletion The topics ineligible for deletion due to any of the conditions mentioned in #3 above
*/
class TopicDeletionManager(controller: KafkaController,
initialTopicsToBeDeleted: Set[String] = Set.empty,
initialHaltedTopicsForDeletion: Set[String] = Set.empty) extends Logging {
initialTopicsIneligibleForDeletion: Set[String] = Set.empty) extends Logging {
val controllerContext = controller.controllerContext
val partitionStateMachine = controller.partitionStateMachine
val replicaStateMachine = controller.replicaStateMachine
var topicsToBeDeleted: mutable.Set[String] = mutable.Set.empty[String] ++ initialTopicsToBeDeleted
var haltedTopicsForDeletion: mutable.Set[String] = mutable.Set.empty[String] ++
(initialHaltedTopicsForDeletion & initialTopicsToBeDeleted)
val deleteTopicsCond = controllerContext.controllerLock.newCondition()
var deleteTopicStateChanged: Boolean = false
val topicsToBeDeleted: mutable.Set[String] = mutable.Set.empty[String] ++ initialTopicsToBeDeleted
val partitionsToBeDeleted: mutable.Set[TopicAndPartition] = topicsToBeDeleted.flatMap(controllerContext.partitionsForTopic)
val deleteLock = new ReentrantLock()
val topicsIneligibleForDeletion: mutable.Set[String] = mutable.Set.empty[String] ++
(initialTopicsIneligibleForDeletion & initialTopicsToBeDeleted)
val deleteTopicsCond = deleteLock.newCondition()
val deleteTopicStateChanged: AtomicBoolean = new AtomicBoolean(false)
var deleteTopicsThread: DeleteTopicsThread = null
val isDeleteTopicEnabled = controller.config.deleteTopicEnable
/**
* Invoked at the end of new controller initiation
*/
def start() {
deleteTopicsThread = new DeleteTopicsThread()
deleteTopicStateChanged = true
deleteTopicsThread.start()
if(isDeleteTopicEnabled) {
deleteTopicsThread = new DeleteTopicsThread()
deleteTopicStateChanged.set(true)
deleteTopicsThread.start()
}
}
/**
* Invoked when the current controller resigns. At this time, all state for topic deletion should be cleared
*/
def shutdown() {
deleteTopicsThread.shutdown()
topicsToBeDeleted.clear()
haltedTopicsForDeletion.clear()
if(isDeleteTopicEnabled) {
deleteTopicsThread.shutdown()
topicsToBeDeleted.clear()
partitionsToBeDeleted.clear()
topicsIneligibleForDeletion.clear()
}
}
/**
@@ -102,8 +112,11 @@ class TopicDeletionManager(controller: KafkaController,
* @param topics Topics that should be deleted
*/
def enqueueTopicsForDeletion(topics: Set[String]) {
topicsToBeDeleted ++= topics
resumeTopicDeletionThread()
if(isDeleteTopicEnabled) {
topicsToBeDeleted ++= topics
partitionsToBeDeleted ++= topics.flatMap(controllerContext.partitionsForTopic)
resumeTopicDeletionThread()
}
}
/**
@@ -115,30 +128,34 @@ class TopicDeletionManager(controller: KafkaController,
* @param topics Topics for which deletion can be resumed
*/
def resumeDeletionForTopics(topics: Set[String] = Set.empty) {
val topicsToResumeDeletion = topics & topicsToBeDeleted
if(topicsToResumeDeletion.size > 0) {
haltedTopicsForDeletion --= topicsToResumeDeletion
resumeTopicDeletionThread()
if(isDeleteTopicEnabled) {
val topicsToResumeDeletion = topics & topicsToBeDeleted
if(topicsToResumeDeletion.size > 0) {
topicsIneligibleForDeletion --= topicsToResumeDeletion
resumeTopicDeletionThread()
}
}
}
/**
* Invoked when a broker that hosts replicas for topics to be deleted goes down. Also invoked when the callback for
* StopReplicaResponse receives an error code for the replicas of a topic to be deleted. As part of this, the replicas
* are moved from ReplicaDeletionStarted to ReplicaDeletionFailed state. Also, the topic is added to the list of topics
* for which deletion is halted until further notice. The delete topic thread is notified so it can retry topic deletion
* are moved from ReplicaDeletionStarted to ReplicaDeletionIneligible state. Also, the topic is added to the list of topics
* ineligible for deletion until further notice. The delete topic thread is notified so it can retry topic deletion
* if it has received a response for all replicas of a topic to be deleted
* @param replicas Replicas for which deletion has failed
*/
def failReplicaDeletion(replicas: Set[PartitionAndReplica]) {
val replicasThatFailedToDelete = replicas.filter(r => isTopicQueuedUpForDeletion(r.topic))
if(replicasThatFailedToDelete.size > 0) {
val topics = replicasThatFailedToDelete.map(_.topic)
debug("Deletion failed for replicas %s. Halting deletion for topics %s"
.format(replicasThatFailedToDelete.mkString(","), topics))
controller.replicaStateMachine.handleStateChanges(replicasThatFailedToDelete, ReplicaDeletionFailed)
haltTopicDeletion(topics)
resumeTopicDeletionThread()
if(isDeleteTopicEnabled) {
val replicasThatFailedToDelete = replicas.filter(r => isTopicQueuedUpForDeletion(r.topic))
if(replicasThatFailedToDelete.size > 0) {
val topics = replicasThatFailedToDelete.map(_.topic)
debug("Deletion failed for replicas %s. Halting deletion for topics %s"
.format(replicasThatFailedToDelete.mkString(","), topics))
controller.replicaStateMachine.handleStateChanges(replicasThatFailedToDelete, ReplicaDeletionIneligible)
markTopicIneligibleForDeletion(topics)
resumeTopicDeletionThread()
}
}
}
@@ -147,25 +164,36 @@ class TopicDeletionManager(controller: KafkaController,
* 1. replicas being down
* 2. partition reassignment in progress for some partitions of the topic
* 3. preferred replica election in progress for some partitions of the topic
* @param topics Topics for which deletion should be halted. No op if the topic is was not previously queued up for deletion
* @param topics Topics that should be marked ineligible for deletion. No op if the topic is was not previously queued up for deletion
*/
def haltTopicDeletion(topics: Set[String]) {
val newTopicsToHaltDeletion = topicsToBeDeleted & topics
haltedTopicsForDeletion ++= newTopicsToHaltDeletion
if(newTopicsToHaltDeletion.size > 0)
info("Halted deletion of topics %s".format(newTopicsToHaltDeletion.mkString(",")))
def markTopicIneligibleForDeletion(topics: Set[String]) {
if(isDeleteTopicEnabled) {
val newTopicsToHaltDeletion = topicsToBeDeleted & topics
topicsIneligibleForDeletion ++= newTopicsToHaltDeletion
if(newTopicsToHaltDeletion.size > 0)
info("Halted deletion of topics %s".format(newTopicsToHaltDeletion.mkString(",")))
}
}
def isTopicDeletionHalted(topic: String): Boolean = {
haltedTopicsForDeletion.contains(topic)
def isTopicIneligibleForDeletion(topic: String): Boolean = {
if(isDeleteTopicEnabled) {
topicsIneligibleForDeletion.contains(topic)
} else
true
}
def isTopicDeletionInProgress(topic: String): Boolean = {
controller.replicaStateMachine.isAtLeastOneReplicaInDeletionStartedState(topic)
if(isDeleteTopicEnabled) {
controller.replicaStateMachine.isAtLeastOneReplicaInDeletionStartedState(topic)
} else
false
}
def isTopicQueuedUpForDeletion(topic: String): Boolean = {
topicsToBeDeleted.contains(topic)
if(isDeleteTopicEnabled) {
topicsToBeDeleted.contains(topic)
} else
false
}
/**
@@ -173,19 +201,22 @@ class TopicDeletionManager(controller: KafkaController,
* controllerLock should be acquired before invoking this API
*/
private def awaitTopicDeletionNotification() {
while(!deleteTopicStateChanged) {
info("Waiting for signal to start or continue topic deletion")
deleteTopicsCond.await()
inLock(deleteLock) {
while(!deleteTopicStateChanged.compareAndSet(true, false)) {
info("Waiting for signal to start or continue topic deletion")
deleteTopicsCond.await()
}
}
deleteTopicStateChanged = false
}
/**
* Signals the delete-topic-thread to process topic deletion
*/
private def resumeTopicDeletionThread() {
deleteTopicStateChanged = true
deleteTopicsCond.signal()
deleteTopicStateChanged.set(true)
inLock(deleteLock) {
deleteTopicsCond.signal()
}
}
/**
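The awaitTopicDeletionNotification and resumeTopicDeletionThread changes in the hunk above move the wait/notify handshake onto a dedicated deleteLock and an AtomicBoolean, so a resume issued while the delete thread is not yet waiting is recorded rather than lost. A self-contained sketch of that pattern (class and method names here are hypothetical):
import java.util.concurrent.atomic.AtomicBoolean
import java.util.concurrent.locks.ReentrantLock

class DeletionSignalSketch {
  private val lock = new ReentrantLock()
  private val cond = lock.newCondition()
  private val stateChanged = new AtomicBoolean(false)

  def awaitWork(): Unit = {
    lock.lock()
    try {
      // compareAndSet(true, false) both tests for a pending signal and consumes it atomically
      while (!stateChanged.compareAndSet(true, false))
        cond.await()
    } finally lock.unlock()
  }

  def resume(): Unit = {
    stateChanged.set(true)   // record the signal first, then wake any waiter
    lock.lock()
    try cond.signal()
    finally lock.unlock()
  }
}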
@@ -205,26 +236,29 @@ class TopicDeletionManager(controller: KafkaController,
* Topic deletion can be retried if -
* 1. Topic deletion is not already complete
* 2. Topic deletion is currently not in progress for that topic
* 3. Topic deletion is currently halted for that topic
* 3. Topic is currently marked ineligible for deletion
* @param topic Topic
* @return Whether or not deletion can be retried for the topic
*/
private def isTopicEligibleForDeletion(topic: String): Boolean = {
topicsToBeDeleted.contains(topic) && (!isTopicDeletionInProgress(topic) && !isTopicDeletionHalted(topic))
topicsToBeDeleted.contains(topic) && (!isTopicDeletionInProgress(topic) && !isTopicIneligibleForDeletion(topic))
}
/**
* If the topic is queued for deletion but deletion is not currently under progress, then deletion is retried for that topic
* To ensure a successful retry, reset states for respective replicas from ReplicaDeletionFailed to OfflineReplica state
* To ensure a successful retry, reset states for respective replicas from ReplicaDeletionIneligible to OfflineReplica state
*@param topic Topic for which deletion should be retried
*/
private def markTopicForDeletionRetry(topic: String) {
// reset replica states from ReplicaDeletionFailed to OfflineReplica
val failedReplicas = controller.replicaStateMachine.replicasInState(topic, ReplicaDeletionFailed)
// reset replica states from ReplicaDeletionIneligible to OfflineReplica
val failedReplicas = controller.replicaStateMachine.replicasInState(topic, ReplicaDeletionIneligible)
controller.replicaStateMachine.handleStateChanges(failedReplicas, OfflineReplica)
}
private def completeDeleteTopic(topic: String) {
// deregister partition change listener on the deleted topic. This is to prevent the partition change listener
// firing before the new topic listener when a deleted topic gets auto created
partitionStateMachine.deregisterPartitionChangeListener(topic)
val replicasForDeletedTopic = controller.replicaStateMachine.replicasInState(topic, ReplicaDeletionSuccessful)
// controller will remove this replica from the state machine as well as its partition assignment cache
replicaStateMachine.handleStateChanges(replicasForDeletedTopic, NonExistentReplica)
@@ -233,6 +267,7 @@ class TopicDeletionManager(controller: KafkaController,
partitionStateMachine.handleStateChanges(partitionsForDeletedTopic, OfflinePartition)
partitionStateMachine.handleStateChanges(partitionsForDeletedTopic, NonExistentPartition)
topicsToBeDeleted -= topic
partitionsToBeDeleted.retain(_.topic != topic)
controllerContext.zkClient.deleteRecursive(ZkUtils.getTopicPath(topic))
controllerContext.zkClient.deleteRecursive(ZkUtils.getTopicConfigPath(topic))
controllerContext.zkClient.delete(ZkUtils.getDeleteTopicPath(topic))
@@ -245,6 +280,9 @@ class TopicDeletionManager(controller: KafkaController,
*/
private def onTopicDeletion(topics: Set[String]) {
info("Topic deletion callback for %s".format(topics.mkString(",")))
// send update metadata so that brokers stop serving data for topics to be deleted
val partitions = topics.flatMap(controllerContext.partitionsForTopic)
controller.sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq, partitions)
val partitionReplicaAssignmentByTopic = controllerContext.partitionReplicaAssignment.groupBy(p => p._1.topic)
topics.foreach { topic =>
onPartitionDeletion(partitionReplicaAssignmentByTopic(topic).map(_._1).toSet)
@@ -257,42 +295,40 @@ class TopicDeletionManager(controller: KafkaController,
* the topics are added to the in progress list. As long as a topic is in the in progress list, deletion for that topic
* is never retried. A topic is removed from the in progress list when
* 1. Either the topic is successfully deleted OR
* 2. No replica for the topic is in ReplicaDeletionStarted state and at least one replica is in ReplicaDeletionFailed state
* 2. No replica for the topic is in ReplicaDeletionStarted state and at least one replica is in ReplicaDeletionIneligible state
* If the topic is queued for deletion but deletion is not currently under progress, then deletion is retried for that topic
* As part of starting deletion, all replicas are moved to the ReplicaDeletionStarted state where the controller sends
* the replicas a StopReplicaRequest (delete=true)
* This callback does the following things -
* 1. Send metadata request to all brokers excluding the topics to be deleted
* 2. Move all dead replicas directly to ReplicaDeletionFailed state. Also halt the deletion of respective topics if
* some replicas are dead since it won't complete successfully anyway
* 2. Move all dead replicas directly to ReplicaDeletionIneligible state. Also mark the respective topics ineligible
* for deletion if some replicas are dead since it won't complete successfully anyway
* 3. Move all alive replicas to ReplicaDeletionStarted state so they can be deleted successfully
*@param replicasForTopicsToBeDeleted
*/
private def startReplicaDeletion(replicasForTopicsToBeDeleted: Set[PartitionAndReplica]) {
replicasForTopicsToBeDeleted.groupBy(_.topic).foreach { case(topic, replicas) =>
// send update metadata so that brokers stop serving data
controller.sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq)
var aliveReplicasForTopic = controllerContext.allLiveReplicas().filter(p => p.topic.equals(topic))
val deadReplicasForTopic = replicasForTopicsToBeDeleted -- aliveReplicasForTopic
val successfullyDeletedReplicas = controller.replicaStateMachine.replicasInState(topic, ReplicaDeletionSuccessful)
val replicasForDeletionRetry = aliveReplicasForTopic -- successfullyDeletedReplicas
// move dead replicas directly to failed state
replicaStateMachine.handleStateChanges(deadReplicasForTopic, ReplicaDeletionFailed)
replicaStateMachine.handleStateChanges(deadReplicasForTopic, ReplicaDeletionIneligible)
// send stop replica to all followers that are not in the OfflineReplica state so they stop sending fetch requests to the leader
replicaStateMachine.handleStateChanges(replicasForDeletionRetry, OfflineReplica)
debug("Deletion started for replicas %s".format(replicasForDeletionRetry.mkString(",")))
controller.replicaStateMachine.handleStateChanges(replicasForDeletionRetry, ReplicaDeletionStarted,
new Callbacks.CallbackBuilder().stopReplicaCallback(deleteTopicStopReplicaCallback).build)
if(deadReplicasForTopic.size > 0)
haltTopicDeletion(Set(topic))
markTopicIneligibleForDeletion(Set(topic))
}
}
/**
* This callback is invoked by the delete topic callback with the list of partitions for topics to be deleted
* It does the following -
* 1. Send UpdateMetadataRequest to all live brokers (that are not shutting down) with all partitions except those for
* which the topics are being deleted. The brokers start rejecting all client requests with UnknownTopicOrPartitionException
* 1. Send UpdateMetadataRequest to all live brokers (that are not shutting down) for partitions that are being
* deleted. The brokers start rejecting all client requests with UnknownTopicOrPartitionException
* 2. Move all replicas for the partitions to OfflineReplica state. This will send StopReplicaRequest to the replicas
* and LeaderAndIsrRequest to the leader with the shrunk ISR. When the leader replica itself is moved to OfflineReplica state,
* it will skip sending the LeaderAndIsrRequest since the leader will be updated to -1
@@ -314,7 +350,7 @@ class TopicDeletionManager(controller: KafkaController,
stopReplicaResponse.responseMap.filter(p => p._2 != ErrorMapping.NoError).map(_._1).toSet
val replicasInError = partitionsInError.map(p => PartitionAndReplica(p.topic, p.partition, replicaId))
inLock(controllerContext.controllerLock) {
// move all the failed replicas to ReplicaDeletionFailed
// move all the failed replicas to ReplicaDeletionIneligible
failReplicaDeletion(replicasInError)
if(replicasInError.size != stopReplicaResponse.responseMap.size) {
// some replicas could have been successfully deleted
@@ -327,8 +363,9 @@ class TopicDeletionManager(controller: KafkaController,
class DeleteTopicsThread() extends ShutdownableThread("delete-topics-thread") {
val zkClient = controllerContext.zkClient
override def doWork() {
awaitTopicDeletionNotification()
inLock(controllerContext.controllerLock) {
awaitTopicDeletionNotification()
val topicsQueuedForDeletion = Set.empty[String] ++ topicsToBeDeleted
if(topicsQueuedForDeletion.size > 0)
info("Handling deletion for topics " + topicsQueuedForDeletion.mkString(","))
@@ -350,7 +387,7 @@ class TopicDeletionManager(controller: KafkaController,
// if you come here, then no replica is in TopicDeletionStarted and all replicas are not in
// TopicDeletionSuccessful. That means, there is at least one failed replica, which means topic deletion
// should be retried
val replicasInTopicDeletionFailedState = controller.replicaStateMachine.replicasInState(topic, ReplicaDeletionFailed)
val replicasInTopicDeletionFailedState = controller.replicaStateMachine.replicasInState(topic, ReplicaDeletionIneligible)
// mark topic for deletion retry
markTopicForDeletionRetry(topic)
info("Retrying delete topic for topic %s since replicas %s were not successfully deleted"
@@ -362,8 +399,8 @@ class TopicDeletionManager(controller: KafkaController,
info("Deletion of topic %s (re)started".format(topic))
// topic deletion will be kicked off
onTopicDeletion(Set(topic))
} else if(isTopicDeletionHalted(topic)) {
info("Not retrying deletion of topic %s at this time since it is halted".format(topic))
} else if(isTopicIneligibleForDeletion(topic)) {
info("Not retrying deletion of topic %s at this time since it is marked ineligible for deletion".format(topic))
}
}
}
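A topic is retried only once nothing is in flight for it: no replica in ReplicaDeletionStarted and at least one in ReplicaDeletionIneligible. A minimal sketch of that decision, using simplified hypothetical types rather than the controller's actual replica state machine:

sealed trait ReplicaDeletionState
case object ReplicaDeletionStarted extends ReplicaDeletionState
case object ReplicaDeletionSuccessful extends ReplicaDeletionState
case object ReplicaDeletionIneligible extends ReplicaDeletionState

object DeletionRetrySketch {
  // Decide what DeleteTopicsThread should do for one topic, given the states of all its replicas.
  def nextAction(replicaStates: Set[ReplicaDeletionState]): String =
    if (replicaStates.nonEmpty && replicaStates.forall(_ == ReplicaDeletionSuccessful))
      "complete deletion"               // every replica reported successful deletion
    else if (!replicaStates.contains(ReplicaDeletionStarted) &&
             replicaStates.contains(ReplicaDeletionIneligible))
      "mark topic for deletion retry"   // nothing in flight, at least one replica failed or is ineligible
    else
      "wait"                            // a StopReplicaRequest round is still outstanding
}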

2
core/src/main/scala/kafka/log/CleanerConfig.scala

@@ -35,7 +35,7 @@ case class CleanerConfig(val numThreads: Int = 1,
val ioBufferSize: Int = 1024*1024,
val maxMessageSize: Int = 32*1024*1024,
val maxIoBytesPerSecond: Double = Double.MaxValue,
val backOffMs: Long = 60 * 1000,
val backOffMs: Long = 15 * 1000,
val enableCleaner: Boolean = true,
val hashAlgorithm: String = "MD5") {
}
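The only change here is the default back-off dropping from 60s to 15s. A usage sketch, assuming the CleanerConfig case class above is on the classpath and its remaining fields keep their defaults:

import kafka.log.CleanerConfig

object CleanerConfigSketch {
  // Override the sleep between cleaner passes explicitly instead of relying on the new 15s default.
  val cleanerConfig = CleanerConfig(numThreads = 2, backOffMs = 30 * 1000L)
}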

3
core/src/main/scala/kafka/log/Log.scala

@@ -75,6 +75,9 @@ class Log(val dir: File,
newGauge(name + "-" + "LogEndOffset",
new Gauge[Long] { def value = logEndOffset })
newGauge(name + "-" + "Size",
new Gauge[Long] {def value = size})
/** The name of this log */
def name = dir.getName()

58
core/src/main/scala/kafka/log/LogCleaner.scala

@@ -25,6 +25,8 @@ import java.io.File
import kafka.common._
import kafka.message._
import kafka.utils._
import kafka.metrics.KafkaMetricsGroup
import com.yammer.metrics.core.Gauge
import java.lang.IllegalStateException
/**
@@ -63,7 +65,8 @@ import java.lang.IllegalStateException
class LogCleaner(val config: CleanerConfig,
val logDirs: Array[File],
val logs: Pool[TopicAndPartition, Log],
time: Time = SystemTime) extends Logging {
time: Time = SystemTime) extends Logging with KafkaMetricsGroup {
/* for managing the state of partitions being cleaned. */
private val cleanerManager = new LogCleanerManager(logDirs, logs);
@@ -71,11 +74,33 @@ class LogCleaner(val config: CleanerConfig,
private val throttler = new Throttler(desiredRatePerSec = config.maxIoBytesPerSecond,
checkIntervalMs = 300,
throttleDown = true,
"cleaner-io",
"bytes",
time = time)
/* the threads */
private val cleaners = (0 until config.numThreads).map(new CleanerThread(_))
/* a metric to track the maximum utilization of any thread's buffer in the last cleaning */
newGauge("max-buffer-utilization-percent",
new Gauge[Int] {
def value: Int = cleaners.map(_.lastStats).map(100 * _.bufferUtilization).max.toInt
})
/* a metric to track the recopy rate of each thread's last cleaning */
newGauge("cleaner-recopy-percent",
new Gauge[Int] {
def value: Int = {
val stats = cleaners.map(_.lastStats)
val recopyRate = stats.map(_.bytesWritten).sum.toDouble / math.max(stats.map(_.bytesRead).sum, 1)
(100 * recopyRate).toInt
}
})
/* a metric to track the maximum cleaning time for the last cleaning from each thread */
newGauge("max-clean-time-secs",
new Gauge[Int] {
def value: Int = cleaners.map(_.lastStats).map(_.elapsedSecs).max.toInt
})
/**
* Start the background cleaning
*/
@@ -131,6 +156,9 @@ class LogCleaner(val config: CleanerConfig,
*/
private class CleanerThread(threadId: Int)
extends ShutdownableThread(name = "kafka-log-cleaner-thread-" + threadId, isInterruptible = false) {
override val loggerName = classOf[LogCleaner].getName
if(config.dedupeBufferSize / config.numThreads > Int.MaxValue)
warn("Cannot use more than 2G of cleaner buffer space per cleaner thread, ignoring excess buffer space...")
@@ -144,6 +172,8 @@ class LogCleaner(val config: CleanerConfig,
time = time,
checkDone = checkDone)
@volatile var lastStats: CleanerStats = new CleanerStats()
private def checkDone(topicAndPartition: TopicAndPartition) {
if (!isRunning.get())
throw new ThreadShutdownException
@@ -170,7 +200,7 @@ class LogCleaner(val config: CleanerConfig,
var endOffset = cleanable.firstDirtyOffset
try {
endOffset = cleaner.clean(cleanable)
logStats(cleaner.id, cleanable.log.name, cleanable.firstDirtyOffset, endOffset, cleaner.stats)
recordStats(cleaner.id, cleanable.log.name, cleanable.firstDirtyOffset, endOffset, cleaner.stats)
} catch {
case pe: LogCleaningAbortedException => // task can be aborted, let it go.
} finally {
@@ -182,10 +212,12 @@ class LogCleaner(val config: CleanerConfig,
/**
* Log out statistics on a single run of the cleaner.
*/
def logStats(id: Int, name: String, from: Long, to: Long, stats: CleanerStats) {
def recordStats(id: Int, name: String, from: Long, to: Long, stats: CleanerStats) {
this.lastStats = stats
cleaner.statsUnderlying.swap
def mb(bytes: Double) = bytes / (1024*1024)
val message =
"%n\tLog cleaner %d cleaned log %s (dirty section = [%d, %d])%n".format(id, name, from, to) +
"%n\tLog cleaner thread %d cleaned log %s (dirty section = [%d, %d])%n".format(id, name, from, to) +
"\t%,.1f MB of log processed in %,.1f seconds (%,.1f MB/sec).%n".format(mb(stats.bytesRead),
stats.elapsedSecs,
mb(stats.bytesRead/stats.elapsedSecs)) +
@@ -193,6 +225,7 @@ class LogCleaner(val config: CleanerConfig,
stats.elapsedIndexSecs,
mb(stats.mapBytesRead)/stats.elapsedIndexSecs,
100 * stats.elapsedIndexSecs.toDouble/stats.elapsedSecs) +
"\tBuffer utilization: %.1f%%%n".format(100 * stats.bufferUtilization) +
"\tCleaned %,.1f MB in %.1f seconds (%,.1f Mb/sec, %.1f%% of total time)%n".format(mb(stats.bytesRead),
stats.elapsedSecs - stats.elapsedIndexSecs,
mb(stats.bytesRead)/(stats.elapsedSecs - stats.elapsedIndexSecs), 100 * (stats.elapsedSecs - stats.elapsedIndexSecs).toDouble/stats.elapsedSecs) +
@@ -215,19 +248,22 @@ class LogCleaner(val config: CleanerConfig,
* @param time The time instance
*/
private[log] class Cleaner(val id: Int,
offsetMap: OffsetMap,
val offsetMap: OffsetMap,
ioBufferSize: Int,
maxIoBufferSize: Int,
dupBufferLoadFactor: Double,
throttler: Throttler,
time: Time,
checkDone: (TopicAndPartition) => Unit) extends Logging {
override val loggerName = classOf[LogCleaner].getName
this.logIdent = "Cleaner " + id + ": "
/* stats on this cleaning */
val stats = new CleanerStats(time)
/* cleaning stats - one instance for the current (or next) cleaning cycle and one for the last completed cycle */
val statsUnderlying = (new CleanerStats(time), new CleanerStats(time))
def stats = statsUnderlying._1
/* buffer used for read i/o */
private var readBuffer = ByteBuffer.allocate(ioBufferSize)
@@ -264,8 +300,12 @@ private[log] class Cleaner(val id: Int,
info("Cleaning log %s (discarding tombstones prior to %s)...".format(log.name, new Date(deleteHorizonMs)))
for (group <- groupSegmentsBySize(log.logSegments(0, endOffset), log.config.segmentSize, log.config.maxIndexSize))
cleanSegments(log, group, offsetMap, deleteHorizonMs)
// record buffer utilization
stats.bufferUtilization = offsetMap.utilization
stats.allDone()
endOffset
}
@@ -499,6 +539,7 @@ private[log] class Cleaner(val id: Int,
*/
private case class CleanerStats(time: Time = SystemTime) {
var startTime, mapCompleteTime, endTime, bytesRead, bytesWritten, mapBytesRead, mapMessagesRead, messagesRead, messagesWritten = 0L
var bufferUtilization = 0.0d
clear()
def readMessage(size: Int) {
@@ -538,6 +579,7 @@ private case class CleanerStats(time: Time = SystemTime) {
mapMessagesRead = 0L
messagesRead = 0L
messagesWritten = 0L
bufferUtilization = 0.0d
}
}
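The cleaner-recopy-percent gauge above divides total bytes written by total bytes read across each thread's last run. A standalone sketch of that computation, using a hypothetical stand-in for CleanerStats with just the two fields involved:

object CleanerRecopySketch {
  // Hypothetical stand-in for CleanerStats, keeping only the fields the gauge uses.
  case class LastRun(bytesRead: Long, bytesWritten: Long)

  def recopyPercent(lastRuns: Seq[LastRun]): Int = {
    val recopyRate = lastRuns.map(_.bytesWritten).sum.toDouble /
      math.max(lastRuns.map(_.bytesRead).sum, 1)   // avoid dividing by zero on an idle cleaner
    (100 * recopyRate).toInt
  }
  // e.g. recopyPercent(Seq(LastRun(bytesRead = 1000, bytesWritten = 250))) == 25
}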

41
core/src/main/scala/kafka/log/LogCleanerManager.scala

@@ -18,6 +18,8 @@
package kafka.log
import java.io.File
import kafka.metrics.KafkaMetricsGroup
import com.yammer.metrics.core.Gauge
import kafka.utils.{Logging, Pool}
import kafka.server.OffsetCheckpoint
import collection.mutable
@@ -39,7 +41,10 @@ private[log] case object LogCleaningPaused extends LogCleaningState
* While a partition is in the LogCleaningPaused state, it won't be scheduled for cleaning again, until cleaning is
* requested to be resumed.
*/
private[log] class LogCleanerManager(val logDirs: Array[File], val logs: Pool[TopicAndPartition, Log]) extends Logging {
private[log] class LogCleanerManager(val logDirs: Array[File], val logs: Pool[TopicAndPartition, Log]) extends Logging with KafkaMetricsGroup {
override val loggerName = classOf[LogCleaner].getName
/* the offset checkpoints holding the last cleaned point for each log */
private val checkpoints = logDirs.map(dir => (dir, new OffsetCheckpoint(new File(dir, "cleaner-offset-checkpoint")))).toMap
@@ -48,8 +53,13 @@ private[log] class LogCleanerManager(val logDirs: Array[File], val logs: Pool[To
/* a global lock used to control all access to the in-progress set and the offset checkpoints */
private val lock = new ReentrantLock
/* for coordinating the pausing and the cleaning of a partition */
private val pausedCleaningCond = lock.newCondition()
/* a gauge for tracking the cleanable ratio of the dirtiest log */
@volatile private var dirtiestLogCleanableRatio = 0.0
newGauge("max-dirty-percent", new Gauge[Int] { def value = (100 * dirtiestLogCleanableRatio).toInt })
/**
* @return the position processed for all logs.
@@ -65,15 +75,17 @@ private[log] class LogCleanerManager(val logDirs: Array[File], val logs: Pool[To
def grabFilthiestLog(): Option[LogToClean] = {
inLock(lock) {
val lastClean = allCleanerCheckpoints()
val cleanableLogs = logs.filter(l => l._2.config.dedupe) // skip any logs marked for delete rather than dedupe
.filterNot(l => inProgress.contains(l._1)) // skip any logs already in-progress
.map(l => LogToClean(l._1, l._2, lastClean.getOrElse(l._1, 0))) // create a LogToClean instance for each
val dirtyLogs = cleanableLogs.filter(l => l.totalBytes > 0) // must have some bytes
.filter(l => l.cleanableRatio > l.log.config.minCleanableRatio) // and must meet the minimum threshold for dirty byte ratio
if(dirtyLogs.isEmpty) {
val dirtyLogs = logs.filter(l => l._2.config.compact) // skip any logs marked for delete rather than dedupe
.filterNot(l => inProgress.contains(l._1)) // skip any logs already in-progress
.map(l => LogToClean(l._1, l._2, // create a LogToClean instance for each
lastClean.getOrElse(l._1, l._2.logSegments.head.baseOffset)))
.filter(l => l.totalBytes > 0) // skip any empty logs
this.dirtiestLogCleanableRatio = if (!dirtyLogs.isEmpty) dirtyLogs.max.cleanableRatio else 0
val cleanableLogs = dirtyLogs.filter(l => l.cleanableRatio > l.log.config.minCleanableRatio) // and must meet the minimum threshold for dirty byte ratio
if(cleanableLogs.isEmpty) {
None
} else {
val filthiest = dirtyLogs.max
val filthiest = cleanableLogs.max
inProgress.put(filthiest.topicPartition, LogCleaningInProgress)
Some(filthiest)
}
@@ -113,7 +125,8 @@ private[log] class LogCleanerManager(val logDirs: Array[File], val logs: Pool[To
case LogCleaningInProgress =>
inProgress.put(topicAndPartition, LogCleaningAborted)
case s =>
throw new IllegalStateException(("Partiiton %s can't be aborted and pasued since it's in %s state").format(topicAndPartition, s))
throw new IllegalStateException("Compaction for partition %s cannot be aborted and paused since it is in %s state."
.format(topicAndPartition, s))
}
}
while (!isCleaningInState(topicAndPartition, LogCleaningPaused))
@@ -129,17 +142,19 @@ private[log] class LogCleanerManager(val logDirs: Array[File], val logs: Pool[To
inLock(lock) {
inProgress.get(topicAndPartition) match {
case None =>
throw new IllegalStateException(("Partiiton %s can't be resumed since it's never paused").format(topicAndPartition))
throw new IllegalStateException("Compaction for partition %s cannot be resumed since it is not paused."
.format(topicAndPartition))
case Some(state) =>
state match {
case LogCleaningPaused =>
inProgress.remove(topicAndPartition)
case s =>
throw new IllegalStateException(("Partiiton %s can't be resumed since it's in %s state").format(topicAndPartition, s))
throw new IllegalStateException("Compaction for partition %s cannot be resumed since it is in %s state."
.format(topicAndPartition, s))
}
}
}
info("The cleaning for partition %s is resumed".format(topicAndPartition))
info("Compaction for partition %s is resumed".format(topicAndPartition))
}
/**
@@ -181,7 +196,7 @@ private[log] class LogCleanerManager(val logDirs: Array[File], val logs: Pool[To
inProgress.put(topicAndPartition, LogCleaningPaused)
pausedCleaningCond.signalAll()
case s =>
throw new IllegalStateException(("In-progress partiiton %s can't be in %s state").format(topicAndPartition, s))
throw new IllegalStateException("In-progress partition %s cannot be in %s state.".format(topicAndPartition, s))
}
}
}
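grabFilthiestLog now records the dirtiest cleanable ratio before applying the min-cleanable-ratio filter, so max-dirty-percent reflects the dirtiest compacted log even when nothing is currently eligible. A simplified sketch of that selection, with a hypothetical stand-in type; the real code also skips empty logs and logs already in progress:

object FilthiestLogSketch {
  // Hypothetical stand-in for LogToClean with just the field the gauge and filter need.
  case class LogToCleanLite(topicPartition: String, cleanableRatio: Double)

  // Returns the ratio to report on the max-dirty-percent gauge and the log to clean, if any.
  def pick(dirtyLogs: Seq[LogToCleanLite], minCleanableRatio: Double): (Double, Option[LogToCleanLite]) = {
    val dirtiestRatio = if (dirtyLogs.nonEmpty) dirtyLogs.map(_.cleanableRatio).max else 0.0
    val cleanable = dirtyLogs.filter(_.cleanableRatio > minCleanableRatio)
    val filthiest = if (cleanable.isEmpty) None else Some(cleanable.maxBy(_.cleanableRatio))
    (dirtiestRatio, filthiest)
  }
}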

8
core/src/main/scala/kafka/log/LogConfig.scala

@@ -34,7 +34,7 @@ import kafka.common._
* @param fileDeleteDelayMs The time to wait before deleting a file from the filesystem
* @param deleteRetentionMs The time to retain delete markers in the log. Only applicable for logs that are being compacted.
* @param minCleanableRatio The ratio of bytes that are available for cleaning to the bytes already cleaned
* @param dedupe Should old segments in this log be deleted or deduplicated?
* @param compact Should old segments in this log be deleted or deduplicated?
*/
case class LogConfig(val segmentSize: Int = 1024*1024,
val segmentMs: Long = Long.MaxValue,
@@ -48,7 +48,7 @@ case class LogConfig(val segmentSize: Int = 1024*1024,
val fileDeleteDelayMs: Long = 60*1000,
val deleteRetentionMs: Long = 24 * 60 * 60 * 1000L,
val minCleanableRatio: Double = 0.5,
val dedupe: Boolean = false) {
val compact: Boolean = false) {
def toProps: Properties = {
val props = new Properties()
@@ -65,7 +65,7 @@ case class LogConfig(val segmentSize: Int = 1024*1024,
props.put(DeleteRetentionMsProp, deleteRetentionMs.toString)
props.put(FileDeleteDelayMsProp, fileDeleteDelayMs.toString)
props.put(MinCleanableDirtyRatioProp, minCleanableRatio.toString)
props.put(CleanupPolicyProp, if(dedupe) "dedupe" else "delete")
props.put(CleanupPolicyProp, if(compact) "compact" else "delete")
props
}
@@ -117,7 +117,7 @@ object LogConfig {
fileDeleteDelayMs = props.getProperty(FileDeleteDelayMsProp).toInt,
deleteRetentionMs = props.getProperty(DeleteRetentionMsProp).toLong,
minCleanableRatio = props.getProperty(MinCleanableDirtyRatioProp).toDouble,
dedupe = props.getProperty(CleanupPolicyProp).trim.toLowerCase == "dedupe")
compact = props.getProperty(CleanupPolicyProp).trim.toLowerCase != "delete")
}
/**

4
core/src/main/scala/kafka/log/LogManager.scala

@@ -52,7 +52,7 @@ class LogManager(val logDirs: Array[File],
private val logs = new Pool[TopicAndPartition, Log]()
createAndValidateLogDirs(logDirs)
private var dirLocks = lockLogDirs(logDirs)
private val dirLocks = lockLogDirs(logDirs)
private val recoveryPointCheckpoints = logDirs.map(dir => (dir, new OffsetCheckpoint(new File(dir, RecoveryPointCheckpointFile)))).toMap
loadLogs(logDirs)
@@ -351,7 +351,7 @@ class LogManager(val logDirs: Array[File],
debug("Beginning log cleanup...")
var total = 0
val startMs = time.milliseconds
for(log <- allLogs; if !log.config.dedupe) {
for(log <- allLogs; if !log.config.compact) {
debug("Garbage collecting '" + log.name + "'")
total += cleanupExpiredSegments(log) + cleanupSegmentsToMaintainSize(log)
}
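The dedupe-to-compact rename threads through LogConfig, LogManager, and KafkaServer; the only subtle part is the string-to-flag mapping, since after this change any cleanup.policy other than "delete" enables compaction. A one-line sketch mirroring the LogConfig.fromProps hunk above:

object CleanupPolicySketch {
  // Mirrors the fromProps change above: anything other than "delete" means compact.
  def isCompactPolicy(cleanupPolicy: String): Boolean =
    cleanupPolicy.trim.toLowerCase != "delete"
}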

9
core/src/main/scala/kafka/network/RequestChannel.scala

@@ -31,6 +31,9 @@ import org.apache.log4j.Logger
object RequestChannel extends Logging {
val AllDone = new Request(1, 2, getShutdownReceive(), 0)
val requestLogger = new RequestLogger("kafka.request.logger")
case class RequestLogger(override val loggerName: String) extends Logging
def getShutdownReceive() = {
val emptyProducerRequest = new ProducerRequest(0, 0, "", 0, 0, collection.mutable.Map[TopicAndPartition, ByteBufferMessageSet]())
@@ -49,7 +52,7 @@ object RequestChannel extends Logging {
val requestId = buffer.getShort()
val requestObj: RequestOrResponse = RequestKeys.deserializerForKey(requestId)(buffer)
buffer = null
private val requestLogger = Logger.getLogger("kafka.request.logger")
private val requestLogger = RequestChannel.requestLogger
trace("Processor %d received request : %s".format(processor, requestObj))
def updateRequestMetrics() {
@@ -81,10 +84,10 @@ object RequestChannel extends Logging {
m.responseSendTimeHist.update(responseSendTime)
m.totalTimeHist.update(totalTime)
}
if(requestLogger.isTraceEnabled)
if(requestLogger.logger.isTraceEnabled)
requestLogger.trace("Completed request:%s from client %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
.format(requestObj.describe(true), remoteAddress, totalTime, requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
else if(requestLogger.isDebugEnabled) {
else {
requestLogger.debug("Completed request:%s from client %s;totalTime:%d,requestQueueTime:%d,localTime:%d,remoteTime:%d,responseQueueTime:%d,sendTime:%d"
.format(requestObj.describe(false), remoteAddress, totalTime, requestQueueTime, apiLocalTime, apiRemoteTime, responseQueueTime, responseSendTime))
}

260
core/src/main/scala/kafka/server/KafkaApis.scala

@@ -27,13 +27,15 @@ import scala.collection._
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic._
import kafka.metrics.KafkaMetricsGroup
import org.I0Itec.zkclient.ZkClient
import kafka.common._
import kafka.utils.{ZkUtils, Pool, SystemTime, Logging}
import kafka.network.RequestChannel.Response
import kafka.cluster.Broker
import kafka.controller.KafkaController
import kafka.utils.Utils.inLock
import org.I0Itec.zkclient.ZkClient
import java.util.concurrent.locks.ReentrantReadWriteLock
import kafka.controller.KafkaController.StateChangeLogger
/**
* Logic to handle the various Kafka requests
@@ -52,12 +54,127 @@ class KafkaApis(val requestChannel: RequestChannel,
private val delayedRequestMetrics = new DelayedRequestMetrics
/* following 3 data structures are updated by the update metadata request
* and is queried by the topic metadata request. */
var metadataCache: mutable.Map[TopicAndPartition, PartitionStateInfo] =
new mutable.HashMap[TopicAndPartition, PartitionStateInfo]()
private val aliveBrokers: mutable.Map[Int, Broker] = new mutable.HashMap[Int, Broker]()
private val partitionMetadataLock = new Object
var metadataCache = new MetadataCache
this.logIdent = "[KafkaApi-%d] ".format(brokerId)
class MetadataCache {
private val cache: mutable.Map[String, mutable.Map[Int, PartitionStateInfo]] =
new mutable.HashMap[String, mutable.Map[Int, PartitionStateInfo]]()
private val aliveBrokers: mutable.Map[Int, Broker] = new mutable.HashMap[Int, Broker]()
private val partitionMetadataLock = new ReentrantReadWriteLock()
def getTopicMetadata(topics: Set[String]) = {
val isAllTopics = topics.isEmpty
val topicsRequested = if(isAllTopics) cache.keySet else topics
val topicResponses: mutable.ListBuffer[TopicMetadata] = new mutable.ListBuffer[TopicMetadata]
inLock(partitionMetadataLock.readLock()) {
for (topic <- topicsRequested) {
if (isAllTopics || this.containsTopic(topic)) {
val partitionStateInfos = cache(topic)
val partitionMetadata = partitionStateInfos.map { case (partitionId, partitionState) =>
val replicas = partitionState.allReplicas
val replicaInfo: Seq[Broker] = replicas.map(aliveBrokers.getOrElse(_, null)).filter(_ != null).toSeq
var leaderInfo: Option[Broker] = None
var isrInfo: Seq[Broker] = Nil
val leaderIsrAndEpoch = partitionState.leaderIsrAndControllerEpoch
val leader = leaderIsrAndEpoch.leaderAndIsr.leader
val isr = leaderIsrAndEpoch.leaderAndIsr.isr
val topicPartition = TopicAndPartition(topic, partitionId)
try {
leaderInfo = aliveBrokers.get(leader)
if (!leaderInfo.isDefined)
throw new LeaderNotAvailableException("Leader not available for %s.".format(topicPartition))
isrInfo = isr.map(aliveBrokers.getOrElse(_, null)).filter(_ != null)
if (replicaInfo.size < replicas.size)
throw new ReplicaNotAvailableException("Replica information not available for following brokers: " +
replicas.filterNot(replicaInfo.map(_.id).contains(_)).mkString(","))
if (isrInfo.size < isr.size)
throw new ReplicaNotAvailableException("In Sync Replica information not available for following brokers: " +
isr.filterNot(isrInfo.map(_.id).contains(_)).mkString(","))
new PartitionMetadata(partitionId, leaderInfo, replicaInfo, isrInfo, ErrorMapping.NoError)
} catch {
case e: Throwable =>
debug("Error while fetching metadata for %s. Possible cause: %s".format(topicPartition, e.getMessage))
new PartitionMetadata(partitionId, leaderInfo, replicaInfo, isrInfo,
ErrorMapping.codeFor(e.getClass.asInstanceOf[Class[Throwable]]))
}
}
topicResponses += new TopicMetadata(topic, partitionMetadata.toSeq)
}
}
}
topicResponses
}
def addPartitionInfo(topic: String,
partitionId: Int,
stateInfo: PartitionStateInfo) {
inLock(partitionMetadataLock.writeLock()) {
addPartitionInfoInternal(topic, partitionId, stateInfo)
}
}
def getPartitionInfos(topic: String) = {
inLock(partitionMetadataLock.readLock()) {
cache(topic)
}
}
def containsTopicAndPartition(topic: String,
partitionId: Int): Boolean = {
inLock(partitionMetadataLock.readLock()) {
cache.get(topic) match {
case Some(partitionInfos) => partitionInfos.contains(partitionId)
case None => false
}
}
}
def containsTopic(topic: String) = cache.contains(topic)
def updateCache(updateMetadataRequest: UpdateMetadataRequest,
brokerId: Int,
stateChangeLogger: StateChangeLogger) {
inLock(partitionMetadataLock.writeLock()) {
updateMetadataRequest.aliveBrokers.foreach(b => aliveBrokers.put(b.id, b))
val topicsToDelete = mutable.Set[String]()
updateMetadataRequest.partitionStateInfos.foreach { partitionState =>
if (partitionState._2.leaderIsrAndControllerEpoch.leaderAndIsr.leader == LeaderAndIsr.LeaderDuringDelete) {
topicsToDelete.add(partitionState._1.topic)
} else {
addPartitionInfoInternal(partitionState._1.topic, partitionState._1.partition, partitionState._2)
stateChangeLogger.trace(("Broker %d cached leader info %s for partition %s in response to " +
"UpdateMetadata request sent by controller %d epoch %d with correlation id %d")
.format(brokerId, partitionState._2, partitionState._1, updateMetadataRequest.controllerId,
updateMetadataRequest.controllerEpoch, updateMetadataRequest.correlationId))
}
}
topicsToDelete.foreach { topic =>
cache.remove(topic)
stateChangeLogger.trace(("Broker %d deleted partitions for topic %s from metadata cache in response to " +
"UpdateMetadata request sent by controller %d epoch %d with correlation id %d")
.format(brokerId, topic, updateMetadataRequest.controllerId,
updateMetadataRequest.controllerEpoch, updateMetadataRequest.correlationId))
}
}
}
private def addPartitionInfoInternal(topic: String,
partitionId: Int,
stateInfo: PartitionStateInfo) {
cache.get(topic) match {
case Some(infos) => infos.put(partitionId, stateInfo)
case None => {
val newInfos: mutable.Map[Int, PartitionStateInfo] = new mutable.HashMap[Int, PartitionStateInfo]
cache.put(topic, newInfos)
newInfos.put(partitionId, stateInfo)
}
}
}
}
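The new MetadataCache replaces the coarse partitionMetadataLock synchronized blocks with a ReentrantReadWriteLock, so concurrent metadata reads no longer serialize behind updates. A minimal self-contained sketch of that locking discipline; the type below is illustrative only, the real MetadataCache also tracks alive brokers and builds TopicMetadata responses under the same lock:

import java.util.concurrent.locks.ReentrantReadWriteLock
import scala.collection.mutable

class ReadWriteCacheSketch[K, V] {
  private val cache = new mutable.HashMap[K, V]()
  private val lock = new ReentrantReadWriteLock()

  // Lookups (metadata reads) take the shared read lock.
  def get(key: K): Option[V] = {
    lock.readLock().lock()
    try cache.get(key) finally lock.readLock().unlock()
  }

  // Updates (UpdateMetadataRequest handling) take the exclusive write lock.
  def put(key: K, value: V): Unit = {
    lock.writeLock().lock()
    try { cache.put(key, value); () } finally lock.writeLock().unlock()
  }
}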
/**
* Top-level method that handles all requests and multiplexes to the right api
*/
@@ -87,7 +204,7 @@ class KafkaApis(val requestChannel: RequestChannel,
// ensureTopicExists is only for client facing requests
private def ensureTopicExists(topic: String) = {
if(!metadataCache.exists { case(topicAndPartition, partitionStateInfo) => topicAndPartition.topic.equals(topic)} )
if (!metadataCache.containsTopic(topic))
throw new UnknownTopicOrPartitionException("Topic " + topic + " either doesn't exist or is in the process of being deleted")
}
@@ -132,35 +249,9 @@ class KafkaApis(val requestChannel: RequestChannel,
stateChangeLogger.warn(stateControllerEpochErrorMessage)
throw new ControllerMovedException(stateControllerEpochErrorMessage)
}
partitionMetadataLock synchronized {
replicaManager.controllerEpoch = updateMetadataRequest.controllerEpoch
replicaManager.controllerEpoch = updateMetadataRequest.controllerEpoch
// cache the list of alive brokers in the cluster
updateMetadataRequest.aliveBrokers.foreach(b => aliveBrokers.put(b.id, b))
updateMetadataRequest.partitionStateInfos.foreach { partitionState =>
metadataCache.put(partitionState._1, partitionState._2)
if(stateChangeLogger.isTraceEnabled)
stateChangeLogger.trace(("Broker %d cached leader info %s for partition %s in response to UpdateMetadata request " +
"sent by controller %d epoch %d with correlation id %d").format(brokerId, partitionState._2, partitionState._1,
updateMetadataRequest.controllerId, updateMetadataRequest.controllerEpoch, updateMetadataRequest.correlationId))
}
// remove the topics that don't exist in the UpdateMetadata request since those are the topics that are
// currently being deleted by the controller
val topicsKnownToThisBroker = metadataCache.map{
case(topicAndPartition, partitionStateInfo) => topicAndPartition.topic }.toSet
val topicsKnownToTheController = updateMetadataRequest.partitionStateInfos.map {
case(topicAndPartition, partitionStateInfo) => topicAndPartition.topic }.toSet
val deletedTopics = topicsKnownToThisBroker -- topicsKnownToTheController
val partitionsToBeDeleted = metadataCache.filter {
case(topicAndPartition, partitionStateInfo) => deletedTopics.contains(topicAndPartition.topic)
}.keySet
partitionsToBeDeleted.foreach { partition =>
metadataCache.remove(partition)
if(stateChangeLogger.isTraceEnabled)
stateChangeLogger.trace(("Broker %d deleted partition %s from metadata cache in response to UpdateMetadata request " +
"sent by controller %d epoch %d with correlation id %d").format(brokerId, partition,
updateMetadataRequest.controllerId, updateMetadataRequest.controllerEpoch, updateMetadataRequest.correlationId))
}
}
metadataCache.updateCache(updateMetadataRequest, brokerId, stateChangeLogger)
val updateMetadataResponse = new UpdateMetadataResponse(updateMetadataRequest.correlationId)
requestChannel.sendResponse(new Response(request, new BoundedByteBufferSend(updateMetadataResponse)))
}
@@ -552,86 +643,33 @@ class KafkaApis(val requestChannel: RequestChannel,
*/
def handleTopicMetadataRequest(request: RequestChannel.Request) {
val metadataRequest = request.requestObj.asInstanceOf[TopicMetadataRequest]
val topicsMetadata = new mutable.ArrayBuffer[TopicMetadata]()
val config = replicaManager.config
var uniqueTopics = Set.empty[String]
uniqueTopics = {
if(metadataRequest.topics.size > 0)
metadataRequest.topics.toSet
else {
partitionMetadataLock synchronized {
metadataCache.keySet.map(_.topic)
}
}
}
val topicMetadataList =
partitionMetadataLock synchronized {
uniqueTopics.map { topic =>
if(metadataCache.keySet.map(_.topic).contains(topic)) {
val partitionStateInfo = metadataCache.filter(p => p._1.topic.equals(topic))
val sortedPartitions = partitionStateInfo.toList.sortWith((m1,m2) => m1._1.partition < m2._1.partition)
val partitionMetadata = sortedPartitions.map { case(topicAndPartition, partitionState) =>
val replicas = metadataCache(topicAndPartition).allReplicas
var replicaInfo: Seq[Broker] = replicas.map(aliveBrokers.getOrElse(_, null)).filter(_ != null).toSeq
var leaderInfo: Option[Broker] = None
var isrInfo: Seq[Broker] = Nil
val leaderIsrAndEpoch = partitionState.leaderIsrAndControllerEpoch
val leader = leaderIsrAndEpoch.leaderAndIsr.leader
val isr = leaderIsrAndEpoch.leaderAndIsr.isr
debug("%s".format(topicAndPartition) + ";replicas = " + replicas + ", in sync replicas = " + isr + ", leader = " + leader)
try {
if(aliveBrokers.keySet.contains(leader))
leaderInfo = Some(aliveBrokers(leader))
else throw new LeaderNotAvailableException("Leader not available for partition %s".format(topicAndPartition))
isrInfo = isr.map(aliveBrokers.getOrElse(_, null)).filter(_ != null)
if(replicaInfo.size < replicas.size)
throw new ReplicaNotAvailableException("Replica information not available for following brokers: " +
replicas.filterNot(replicaInfo.map(_.id).contains(_)).mkString(","))
if(isrInfo.size < isr.size)
throw new ReplicaNotAvailableException("In Sync Replica information not available for following brokers: " +
isr.filterNot(isrInfo.map(_.id).contains(_)).mkString(","))
new PartitionMetadata(topicAndPartition.partition, leaderInfo, replicaInfo, isrInfo, ErrorMapping.NoError)
} catch {
case e: Throwable =>
error("Error while fetching metadata for partition %s".format(topicAndPartition), e)
new PartitionMetadata(topicAndPartition.partition, leaderInfo, replicaInfo, isrInfo,
ErrorMapping.codeFor(e.getClass.asInstanceOf[Class[Throwable]]))
}
}
new TopicMetadata(topic, partitionMetadata)
} else {
// topic doesn't exist, send appropriate error code
new TopicMetadata(topic, Seq.empty[PartitionMetadata], ErrorMapping.UnknownTopicOrPartitionCode)
}
}
}
val topicMetadata = getTopicMetadata(metadataRequest.topics.toSet)
trace("Sending topic metadata %s for correlation id %d to client %s".format(topicMetadata.mkString(","), metadataRequest.correlationId, metadataRequest.clientId))
val response = new TopicMetadataResponse(topicMetadata, metadataRequest.correlationId)
requestChannel.sendResponse(new RequestChannel.Response(request, new BoundedByteBufferSend(response)))
}
// handle auto create topics
topicMetadataList.foreach { topicMetadata =>
topicMetadata.errorCode match {
case ErrorMapping.NoError => topicsMetadata += topicMetadata
case ErrorMapping.UnknownTopicOrPartitionCode =>
if (config.autoCreateTopicsEnable) {
try {
AdminUtils.createTopic(zkClient, topicMetadata.topic, config.numPartitions, config.defaultReplicationFactor)
info("Auto creation of topic %s with %d partitions and replication factor %d is successful!"
.format(topicMetadata.topic, config.numPartitions, config.defaultReplicationFactor))
} catch {
case e: TopicExistsException => // let it go, possibly another broker created this topic
}
topicsMetadata += new TopicMetadata(topicMetadata.topic, topicMetadata.partitionsMetadata, ErrorMapping.LeaderNotAvailableCode)
} else {
topicsMetadata += topicMetadata
private def getTopicMetadata(topics: Set[String]): Seq[TopicMetadata] = {
val topicResponses = metadataCache.getTopicMetadata(topics)
if (topics.size > 0 && topicResponses.size != topics.size) {
val nonExistentTopics = topics -- topicResponses.map(_.topic).toSet
val responsesForNonExistentTopics = nonExistentTopics.map { topic =>
if (config.autoCreateTopicsEnable) {
try {
AdminUtils.createTopic(zkClient, topic, config.numPartitions, config.defaultReplicationFactor)
info("Auto creation of topic %s with %d partitions and replication factor %d is successful!".format(topic, config.numPartitions, config.defaultReplicationFactor))
} catch {
case e: TopicExistsException => // let it go, possibly another broker created this topic
}
case _ =>
debug("Error while fetching topic metadata for topic %s due to %s ".format(topicMetadata.topic,
ErrorMapping.exceptionFor(topicMetadata.errorCode).getClass.getName))
topicsMetadata += topicMetadata
new TopicMetadata(topic, Seq.empty[PartitionMetadata], ErrorMapping.LeaderNotAvailableCode)
} else {
new TopicMetadata(topic, Seq.empty[PartitionMetadata], ErrorMapping.UnknownTopicOrPartitionCode)
}
}
topicResponses.appendAll(responsesForNonExistentTopics)
}
trace("Sending topic metadata %s for correlation id %d to client %s".format(topicsMetadata.mkString(","), metadataRequest.correlationId, metadataRequest.clientId))
val response = new TopicMetadataResponse(topicsMetadata.toSeq, metadataRequest.correlationId)
requestChannel.sendResponse(new RequestChannel.Response(request, new BoundedByteBufferSend(response)))
topicResponses
}
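For topics missing from the cache, the response now depends on auto.create.topics.enable: creation is kicked off and LeaderNotAvailable returned, otherwise the client gets UnknownTopicOrPartition. A condensed sketch of that branch; createTopic stands in for AdminUtils.createTopic and the strings stand in for ErrorMapping codes:

object TopicMetadataSketch {
  def errorForMissingTopic(autoCreateTopicsEnable: Boolean,
                           createTopic: String => Unit,
                           topic: String): String =
    if (autoCreateTopicsEnable) {
      try createTopic(topic)
      catch { case _: Exception => () }   // possibly another broker created it first
      "LeaderNotAvailableCode"            // creation kicked off, leader not elected yet
    } else {
      "UnknownTopicOrPartitionCode"
    }
}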
/*

7
core/src/main/scala/kafka/server/KafkaConfig.scala

@@ -116,7 +116,7 @@ class KafkaConfig private (val props: VerifiableProperties) extends ZKConfig(pro
/* the frequency in minutes that the log cleaner checks whether any log is eligible for deletion */
val logCleanupIntervalMs = props.getLongInRange("log.retention.check.interval.ms", 5*60*1000, (1, Long.MaxValue))
/* the default cleanup policy for segments beyond the retention window, must be either "delete" or "dedupe" */
/* the default cleanup policy for segments beyond the retention window, must be either "delete" or "compact" */
val logCleanupPolicy = props.getString("log.cleanup.policy", "delete")
/* the number of background threads to use for log cleaning */
@@ -137,7 +137,7 @@ class KafkaConfig private (val props: VerifiableProperties) extends ZKConfig(pro
val logCleanerDedupeBufferLoadFactor = props.getDouble("log.cleaner.io.buffer.load.factor", 0.9d)
/* the amount of time to sleep when there are no logs to clean */
val logCleanerBackoffMs = props.getLongInRange("log.cleaner.backoff.ms", 30*1000, (0L, Long.MaxValue))
val logCleanerBackoffMs = props.getLongInRange("log.cleaner.backoff.ms", 15*1000, (0L, Long.MaxValue))
/* the minimum ratio of dirty log to total log for a log to eligible for cleaning */
val logCleanerMinCleanRatio = props.getDouble("log.cleaner.min.cleanable.ratio", 0.5)
@@ -248,4 +248,7 @@ class KafkaConfig private (val props: VerifiableProperties) extends ZKConfig(pro
/* the maximum size for a metadata entry associated with an offset commit */
val offsetMetadataMaxSize = props.getInt("offset.metadata.max.bytes", 1024)
/* Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off */
val deleteTopicEnable = props.getBoolean("delete.topic.enable", false)
}
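A short sketch of the broker-side knobs touched in this hunk, with the new delete.topic.enable flag that the updated delete-topic tests set explicitly; property names are taken from the hunk, and the usual required broker settings are omitted:

import java.util.Properties

object DeleteTopicConfigSketch {
  val props = new Properties()
  props.put("delete.topic.enable", "true")      // off by default; admin-tool deletes are ignored otherwise
  props.put("log.cleaner.backoff.ms", "15000")  // new default sleep when there is nothing to clean
  // val config = new KafkaConfig(props)        // plus broker.id, log.dirs, zookeeper.connect, ...
}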

2
core/src/main/scala/kafka/server/KafkaServer.scala

@@ -260,7 +260,7 @@ class KafkaServer(val config: KafkaConfig, time: Time = SystemTime) extends Logg
deleteRetentionMs = config.logCleanerDeleteRetentionMs,
fileDeleteDelayMs = config.logDeleteDelayMs,
minCleanableRatio = config.logCleanerMinCleanRatio,
dedupe = config.logCleanupPolicy.trim.toLowerCase == "dedupe")
compact = config.logCleanupPolicy.trim.toLowerCase == "compact")
val defaultProps = defaultLogConfig.toProps
val configs = AdminUtils.fetchAllTopicConfigs(zkClient).mapValues(LogConfig.fromProps(defaultProps, _))
// read the log configurations from zookeeper

2
core/src/main/scala/kafka/server/OffsetCheckpoint.scala

@@ -90,7 +90,7 @@ class OffsetCheckpoint(val file: File) extends Logging {
val topic = pieces(0)
val partition = pieces(1).toInt
val offset = pieces(2).toLong
offsets += (TopicAndPartition(pieces(0), partition) -> offset)
offsets += (TopicAndPartition(topic, partition) -> offset)
line = reader.readLine()
}
if(offsets.size != expectedSize)
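The fix itself is cosmetic (reuse the already-extracted topic variable), but the record format being parsed is worth spelling out. A sketch of one checkpoint record, assuming whitespace-separated fields as the pieces(0..2) reads suggest:

object CheckpointParseSketch {
  // Parse one checkpoint record into (topic, partition, offset).
  def parseCheckpointLine(line: String): (String, Int, Long) = {
    val pieces = line.split("\\s+")
    (pieces(0), pieces(1).toInt, pieces(2).toLong)
  }
}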

6
core/src/main/scala/kafka/server/ReplicaManager.scala

@@ -53,10 +53,10 @@ class ReplicaManager(val config: KafkaConfig,
private val replicaStateChangeLock = new Object
val replicaFetcherManager = new ReplicaFetcherManager(config, this)
private val highWatermarkCheckPointThreadStarted = new AtomicBoolean(false)
val highWatermarkCheckpoints = config.logDirs.map(dir => (dir, new OffsetCheckpoint(new File(dir, ReplicaManager.HighWatermarkFilename)))).toMap
val highWatermarkCheckpoints = config.logDirs.map(dir => (new File(dir).getAbsolutePath, new OffsetCheckpoint(new File(dir, ReplicaManager.HighWatermarkFilename)))).toMap
private var hwThreadInitialized = false
this.logIdent = "[Replica Manager on Broker " + localBrokerId + "]: "
val stateChangeLogger = Logger.getLogger(KafkaController.stateChangeLogger)
val stateChangeLogger = KafkaController.stateChangeLogger
newGauge(
"LeaderCount",
@@ -440,7 +440,7 @@ class ReplicaManager(val config: KafkaConfig,
*/
def checkpointHighWatermarks() {
val replicas = allPartitions.values.map(_.getReplica(config.brokerId)).collect{case Some(replica) => replica}
val replicasByDir = replicas.filter(_.log.isDefined).groupBy(_.log.get.dir.getParent)
val replicasByDir = replicas.filter(_.log.isDefined).groupBy(_.log.get.dir.getParentFile.getAbsolutePath)
for((dir, reps) <- replicasByDir) {
val hwms = reps.map(r => (new TopicAndPartition(r) -> r.highWatermark)).toMap
try {

52
core/src/main/scala/kafka/server/TopicConfigManager.scala

@@ -40,6 +40,7 @@ import org.I0Itec.zkclient.{IZkChildListener, ZkClient}
* To update a topic config we first update the topic config properties. Then we create a new sequential
* znode under the change path which contains the name of the topic that was updated, say
* /brokers/config_changes/config_change_13321
* This is just a notification--the actual config change is stored only once under the /brokers/topics/<topic_name>/config path.
*
* This will fire a watcher on all brokers. This watcher works as follows. It reads all the config change notifications.
* It keeps track of the highest config change suffix number it has applied previously. For any previously applied change it finds
@@ -59,7 +60,7 @@ import org.I0Itec.zkclient.{IZkChildListener, ZkClient}
*/
class TopicConfigManager(private val zkClient: ZkClient,
private val logManager: LogManager,
private val changeExpirationMs: Long = 10*60*1000,
private val changeExpirationMs: Long = 15*60*1000,
private val time: Time = SystemTime) extends Logging {
private var lastExecutedChange = -1L
@@ -86,7 +87,7 @@ class TopicConfigManager(private val zkClient: ZkClient,
*/
private def processConfigChanges(notifications: Seq[String]) {
if (notifications.size > 0) {
info("Processing %d topic config change notification(s)...".format(notifications.size))
info("Processing config change notification(s)...")
val now = time.milliseconds
val logs = logManager.logsByTopicPartition.toBuffer
val logsByTopic = logs.groupBy(_._1.topic).mapValues(_.map(_._2))
@@ -94,26 +95,37 @@ class TopicConfigManager(private val zkClient: ZkClient,
val changeId = changeNumber(notification)
if (changeId > lastExecutedChange) {
val changeZnode = ZkUtils.TopicConfigChangesPath + "/" + notification
val (topicJson, stat) = ZkUtils.readData(zkClient, changeZnode)
val topic = topicJson.substring(1, topicJson.length - 1) // dequote
if (logsByTopic.contains(topic)) {
/* combine the default properties with the overrides in zk to create the new LogConfig */
val props = new Properties(logManager.defaultConfig.toProps)
props.putAll(AdminUtils.fetchTopicConfig(zkClient, topic))
val logConfig = LogConfig.fromProps(props)
for (log <- logsByTopic(topic))
log.config = logConfig
lastExecutedChange = changeId
info("Processed topic config change %d for topic %s, setting new config to %s.".format(changeId, topic, props))
} else {
if (now - stat.getCtime > changeExpirationMs) {
/* this change is now obsolete, try to delete it unless it is the last change left */
error("Ignoring topic config change %d for topic %s since the change has expired")
} else {
error("Ignoring topic config change %d for topic %s since the topic may have been deleted")
val (jsonOpt, stat) = ZkUtils.readDataMaybeNull(zkClient, changeZnode)
if(jsonOpt.isDefined) {
val json = jsonOpt.get
val topic = json.substring(1, json.length - 1) // hacky way to dequote
if (logsByTopic.contains(topic)) {
/* combine the default properties with the overrides in zk to create the new LogConfig */
val props = new Properties(logManager.defaultConfig.toProps)
props.putAll(AdminUtils.fetchTopicConfig(zkClient, topic))
val logConfig = LogConfig.fromProps(props)
for (log <- logsByTopic(topic))
log.config = logConfig
info("Processed topic config change %d for topic %s, setting new config to %s.".format(changeId, topic, props))
purgeObsoleteNotifications(now, notifications)
}
ZkUtils.deletePath(zkClient, changeZnode)
}
lastExecutedChange = changeId
}
}
}
}
private def purgeObsoleteNotifications(now: Long, notifications: Seq[String]) {
for(notification <- notifications.sorted) {
val (jsonOpt, stat) = ZkUtils.readDataMaybeNull(zkClient, ZkUtils.TopicConfigChangesPath + "/" + notification)
if(jsonOpt.isDefined) {
val changeZnode = ZkUtils.TopicConfigChangesPath + "/" + notification
if (now - stat.getCtime > changeExpirationMs) {
debug("Purging config change notification " + notification)
ZkUtils.deletePath(zkClient, changeZnode)
} else {
return
}
}
}
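Two pieces of bookkeeping drive the watcher above: the sequence number embedded in the notification znode name, and the expiration check used by purgeObsoleteNotifications. A sketch of both, with simplified helpers; the config_change_ prefix comes from the comment block earlier in this file:

object ConfigChangeSketch {
  // Notification znodes are named config_change_<seqNo>; only changes newer than the
  // last executed one are applied.
  def changeNumber(notification: String): Long =
    notification.stripPrefix("config_change_").toLong

  // A notification is purged once its znode creation time is older than changeExpirationMs.
  def isExpired(nowMs: Long, znodeCtimeMs: Long, changeExpirationMs: Long = 15 * 60 * 1000L): Boolean =
    nowMs - znodeCtimeMs > changeExpirationMs
}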

12
core/src/main/scala/kafka/utils/Throttler.scala

@@ -17,6 +17,8 @@
package kafka.utils;
import kafka.metrics.KafkaMetricsGroup
import java.util.concurrent.TimeUnit
import java.util.Random
import scala.math._
@@ -33,14 +35,18 @@ import scala.math._
@threadsafe
class Throttler(val desiredRatePerSec: Double,
val checkIntervalMs: Long = 100L,
val throttleDown: Boolean = true,
val time: Time = SystemTime) extends Logging {
val throttleDown: Boolean = true,
metricName: String = "throttler",
units: String = "entries",
val time: Time = SystemTime) extends Logging with KafkaMetricsGroup {
private val lock = new Object
private val meter = newMeter(metricName, units, TimeUnit.SECONDS)
private var periodStartNs: Long = time.nanoseconds
private var observedSoFar: Double = 0.0
def maybeThrottle(observed: Double) {
meter.mark(observed.toLong)
lock synchronized {
observedSoFar += observed
val now = time.nanoseconds
@@ -72,7 +78,7 @@ object Throttler {
def main(args: Array[String]) {
val rand = new Random()
val throttler = new Throttler(100000, 100, true, SystemTime)
val throttler = new Throttler(100000, 100, true, time = SystemTime)
val interval = 30000
var start = System.currentTimeMillis
var total = 0
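A usage sketch of the new metricName/units parameters, mirroring how the cleaner constructs its "cleaner-io" throttler in bytes above; the imports assume the kafka.utils package as in this file:

import kafka.utils.{SystemTime, Throttler}

object ThrottlerUsageSketch {
  val ioThrottler = new Throttler(desiredRatePerSec = 10 * 1024 * 1024,
                                  checkIntervalMs = 300,
                                  throttleDown = true,
                                  metricName = "cleaner-io",
                                  units = "bytes",
                                  time = SystemTime)

  def copyChunk(bytes: Int): Unit =
    ioThrottler.maybeThrottle(bytes)  // marks the new meter, then sleeps if the observed rate exceeds the target
}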

4
core/src/main/scala/kafka/utils/VerifiableProperties.scala

@@ -124,14 +124,14 @@ class VerifiableProperties(val props: Properties) extends Logging {
* Get a required argument as a double
* @param name The property name
* @return the value
* @throw IllegalArgumentException If the given property is not present
* @throws IllegalArgumentException If the given property is not present
*/
def getDouble(name: String): Double = getString(name).toDouble
/**
* Get an optional argument as a double
* @param name The property name
* @default The default value for the property if not present
* @param default The default value for the property if not present
*/
def getDouble(name: String, default: Double): Double = {
if(containsKey(name))

2
core/src/main/scala/kafka/utils/ZkUtils.scala

@@ -101,7 +101,7 @@ object ZkUtils extends Logging {
}
def setupCommonPaths(zkClient: ZkClient) {
for(path <- Seq(ConsumersPath, BrokerIdsPath, BrokerTopicsPath, TopicConfigChangesPath, TopicConfigPath))
for(path <- Seq(ConsumersPath, BrokerIdsPath, BrokerTopicsPath, TopicConfigChangesPath, TopicConfigPath, DeleteTopicsPath))
makeSurePersistentPathExists(zkClient, path)
}

9
core/src/test/scala/other/kafka/TestLogCleaning.scala

@@ -243,11 +243,11 @@ object TestLogCleaning {
percentDeletes: Int): File = {
val producerProps = new Properties
producerProps.setProperty("producer.type", "async")
producerProps.setProperty("broker.list", brokerUrl)
producerProps.setProperty("metadata.broker.list", brokerUrl)
producerProps.setProperty("serializer.class", classOf[StringEncoder].getName)
producerProps.setProperty("key.serializer.class", classOf[StringEncoder].getName)
producerProps.setProperty("queue.enqueue.timeout.ms", "-1")
producerProps.setProperty("batch.size", 1000.toString)
producerProps.setProperty("batch.num.messages", 1000.toString)
val producer = new Producer[String, String](new ProducerConfig(producerProps))
val rand = new Random(1)
val keyCount = (messages / dups).toInt
@@ -275,8 +275,9 @@ object TestLogCleaning {
def makeConsumer(zkUrl: String, topics: Array[String]): ZookeeperConsumerConnector = {
val consumerProps = new Properties
consumerProps.setProperty("group.id", "log-cleaner-test-" + new Random().nextInt(Int.MaxValue))
consumerProps.setProperty("zk.connect", zkUrl)
consumerProps.setProperty("consumer.timeout.ms", (10*1000).toString)
consumerProps.setProperty("zookeeper.connect", zkUrl)
consumerProps.setProperty("consumer.timeout.ms", (20*1000).toString)
consumerProps.setProperty("auto.offset.reset", "smallest")
new ZookeeperConsumerConnector(new ConsumerConfig(consumerProps))
}

10
core/src/test/scala/unit/kafka/admin/AdminTest.scala

@@ -320,9 +320,9 @@ class AdminTest extends JUnit3Suite with ZooKeeperTestHarness with Logging {
try {
// wait for the update metadata request to trickle to the brokers
assertTrue("Topic test not created after timeout", TestUtils.waitUntilTrue(() =>
activeServers.foldLeft(true)(_ && _.apis.metadataCache(TopicAndPartition(topic, partition)).leaderIsrAndControllerEpoch.leaderAndIsr.isr.size != 3), 1000))
activeServers.foldLeft(true)(_ && _.apis.metadataCache.getPartitionInfos(topic)(partition).leaderIsrAndControllerEpoch.leaderAndIsr.isr.size != 3), 1000))
assertEquals(0, partitionsRemaining.size)
var partitionStateInfo = activeServers.head.apis.metadataCache(TopicAndPartition(topic, partition))
var partitionStateInfo = activeServers.head.apis.metadataCache.getPartitionInfos(topic)(partition)
var leaderAfterShutdown = partitionStateInfo.leaderIsrAndControllerEpoch.leaderAndIsr.leader
assertEquals(0, leaderAfterShutdown)
assertEquals(2, partitionStateInfo.leaderIsrAndControllerEpoch.leaderAndIsr.isr.size)
@@ -331,15 +331,15 @@ class AdminTest extends JUnit3Suite with ZooKeeperTestHarness with Logging {
partitionsRemaining = controller.shutdownBroker(1)
assertEquals(0, partitionsRemaining.size)
activeServers = servers.filter(s => s.config.brokerId == 0)
partitionStateInfo = activeServers.head.apis.metadataCache(TopicAndPartition(topic, partition))
partitionStateInfo = activeServers.head.apis.metadataCache.getPartitionInfos(topic)(partition)
leaderAfterShutdown = partitionStateInfo.leaderIsrAndControllerEpoch.leaderAndIsr.leader
assertEquals(0, leaderAfterShutdown)
assertTrue(servers.foldLeft(true)(_ && _.apis.metadataCache(TopicAndPartition(topic, partition)).leaderIsrAndControllerEpoch.leaderAndIsr.leader == 0))
assertTrue(servers.foldLeft(true)(_ && _.apis.metadataCache.getPartitionInfos(topic)(partition).leaderIsrAndControllerEpoch.leaderAndIsr.leader == 0))
partitionsRemaining = controller.shutdownBroker(0)
assertEquals(1, partitionsRemaining.size)
// leader doesn't change since all the replicas are shut down
assertTrue(servers.foldLeft(true)(_ && _.apis.metadataCache(TopicAndPartition(topic, partition)).leaderIsrAndControllerEpoch.leaderAndIsr.leader == 0))
assertTrue(servers.foldLeft(true)(_ && _.apis.metadataCache.getPartitionInfos(topic)(partition).leaderIsrAndControllerEpoch.leaderAndIsr.leader == 0))
}
finally {
servers.foreach(_.shutdown())

75
core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala

@@ -219,8 +219,10 @@ class DeleteTopicTest extends JUnit3Suite with ZooKeeperTestHarness {
val expectedReplicaAssignment = Map(0 -> List(0, 1, 2))
val topic = "test"
val topicAndPartition = TopicAndPartition(topic, 0)
val brokerConfigs = TestUtils.createBrokerConfigs(4)
brokerConfigs.foreach(p => p.setProperty("delete.topic.enable", "true"))
// create brokers
val allServers = TestUtils.createBrokerConfigs(4).map(b => TestUtils.createServer(new KafkaConfig(b)))
val allServers = brokerConfigs.map(b => TestUtils.createServer(new KafkaConfig(b)))
val servers = allServers.filter(s => expectedReplicaAssignment(0).contains(s.config.brokerId))
// create the topic
AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkClient, topic, expectedReplicaAssignment)
@@ -259,8 +261,10 @@ class DeleteTopicTest extends JUnit3Suite with ZooKeeperTestHarness {
val expectedReplicaAssignment = Map(0 -> List(0, 1, 2))
val topic = "test"
val topicAndPartition = TopicAndPartition(topic, 0)
val brokerConfigs = TestUtils.createBrokerConfigs(4)
brokerConfigs.foreach(p => p.setProperty("delete.topic.enable", "true"))
// create brokers
val allServers = TestUtils.createBrokerConfigs(4).map(b => TestUtils.createServer(new KafkaConfig(b)))
val allServers = brokerConfigs.map(b => TestUtils.createServer(new KafkaConfig(b)))
val servers = allServers.filter(s => expectedReplicaAssignment(0).contains(s.config.brokerId))
// create the topic
AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkClient, topic, expectedReplicaAssignment)
@@ -296,9 +300,8 @@ class DeleteTopicTest extends JUnit3Suite with ZooKeeperTestHarness {
def testDeleteTopicDuringAddPartition() {
val topic = "test"
val servers = createTestTopicAndCluster(topic)
// add partitions to topic
val topicAndPartition = TopicAndPartition(topic, 0)
val newPartition = TopicAndPartition(topic, 1)
// add partitions to topic
AdminUtils.addPartitions(zkClient, topic, 2, "0:1:2,0:1:2")
// start topic deletion
AdminUtils.deleteTopic(zkClient, topic)
@@ -366,11 +369,73 @@ class DeleteTopicTest extends JUnit3Suite with ZooKeeperTestHarness {
servers.foreach(_.shutdown())
}
@Test
def testAutoCreateAfterDeleteTopic() {
val topicAndPartition = TopicAndPartition("test", 0)
val topic = topicAndPartition.topic
val servers = createTestTopicAndCluster(topic)
// start topic deletion
AdminUtils.deleteTopic(zkClient, topic)
verifyTopicDeletion(topic, servers)
// test if first produce request after topic deletion auto creates the topic
val props = new Properties()
props.put("metadata.broker.list", servers.map(s => s.config.hostName + ":" + s.config.port).mkString(","))
props.put("serializer.class", "kafka.serializer.StringEncoder")
props.put("producer.type", "sync")
props.put("request.required.acks", "1")
props.put("message.send.max.retries", "1")
val producerConfig = new ProducerConfig(props)
val producer = new Producer[String, String](producerConfig)
try{
producer.send(new KeyedMessage[String, String](topic, "test", "test1"))
} catch {
case e: FailedToSendMessageException => fail("Topic should have been auto created")
case oe: Throwable => fail("fails with exception", oe)
}
// test the topic path exists
assertTrue("Topic not auto created", ZkUtils.pathExists(zkClient, ZkUtils.getTopicPath(topic)))
// wait until leader is elected
val leaderIdOpt = TestUtils.waitUntilLeaderIsElectedOrChanged(zkClient, topic, 0, 1000)
assertTrue("New leader should be elected after re-creating topic test", leaderIdOpt.isDefined)
try {
producer.send(new KeyedMessage[String, String](topic, "test", "test1"))
} catch {
case e: FailedToSendMessageException => fail("Topic should have been auto created")
case oe: Throwable => fail("fails with exception", oe)
} finally {
producer.close()
}
servers.foreach(_.shutdown())
}
@Test
def testDeleteNonExistingTopic() {
val topicAndPartition = TopicAndPartition("test", 0)
val topic = topicAndPartition.topic
val servers = createTestTopicAndCluster(topic)
// start topic deletion
AdminUtils.deleteTopic(zkClient, "test2")
// verify delete topic path for test2 is removed from zookeeper
verifyTopicDeletion("test2", servers)
// verify that topic test is untouched
assertTrue("Replicas for topic test not created in 1000ms", TestUtils.waitUntilTrue(() => servers.foldLeft(true)((res, server) =>
res && server.getLogManager().getLog(topicAndPartition).isDefined), 1000))
// test the topic path exists
assertTrue("Topic test mistakenly deleted", ZkUtils.pathExists(zkClient, ZkUtils.getTopicPath(topic)))
// topic test should have a leader
val leaderIdOpt = TestUtils.waitUntilLeaderIsElectedOrChanged(zkClient, topic, 0, 1000)
assertTrue("Leader should exist for topic test", leaderIdOpt.isDefined)
servers.foreach(_.shutdown())
}
private def createTestTopicAndCluster(topic: String): Seq[KafkaServer] = {
val expectedReplicaAssignment = Map(0 -> List(0, 1, 2))
val topicAndPartition = TopicAndPartition(topic, 0)
val brokerConfigs = TestUtils.createBrokerConfigs(3)
brokerConfigs.foreach(p => p.setProperty("delete.topic.enable", "true"))
// create brokers
val servers = TestUtils.createBrokerConfigs(3).map(b => TestUtils.createServer(new KafkaConfig(b)))
val servers = brokerConfigs.map(b => TestUtils.createServer(new KafkaConfig(b)))
// create the topic
AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkClient, topic, expectedReplicaAssignment)
// wait until replica log is created on every broker
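For orientation, a minimal sketch of the deletion flow these tests exercise, using only the AdminUtils/ZkUtils calls that appear in the hunks above (the helper name is hypothetical; brokers must be started with delete.topic.enable=true, and deletion completes asynchronously via the controller):
import kafka.admin.AdminUtils
import kafka.utils.ZkUtils
import org.I0Itec.zkclient.ZkClient
// Hypothetical helper: request deletion and report whether the topic path is
// already gone. The controller removes it asynchronously, so callers poll.
def requestTopicDeletion(zkClient: ZkClient, topic: String): Boolean = {
  AdminUtils.deleteTopic(zkClient, topic)                     // writes the delete-topic marker znode
  !ZkUtils.pathExists(zkClient, ZkUtils.getTopicPath(topic))  // usually still false immediately after the call
}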

2
core/src/test/scala/unit/kafka/log/CleanerTest.scala

@@ -33,7 +33,7 @@ import kafka.message._
class CleanerTest extends JUnitSuite {
val dir = TestUtils.tempDir()
val logConfig = LogConfig(segmentSize=1024, maxIndexSize=1024, dedupe=true)
val logConfig = LogConfig(segmentSize=1024, maxIndexSize=1024, compact=true)
val time = new MockTime()
val throttler = new Throttler(desiredRatePerSec = Double.MaxValue, checkIntervalMs = Long.MaxValue, time = time)
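This diff renames the old "dedupe" cleanup flag to "compact". A minimal sketch of creating a log-compacted topic with the new flag (hypothetical helper; AdminUtils.createTopic and LogConfig(...).toProps are the calls used elsewhere in this diff):
import kafka.admin.AdminUtils
import kafka.log.LogConfig
import org.I0Itec.zkclient.ZkClient
// 'compact = true' is the renamed form of the old 'dedupe = true' setting.
def createCompactedTopic(zkClient: ZkClient, topic: String): Unit =
  AdminUtils.createTopic(zkClient, topic, 1, 1, LogConfig(compact = true).toProps)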

4
core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala

@@ -92,7 +92,7 @@ class LogCleanerIntegrationTest extends JUnitSuite {
def makeCleaner(parts: Int,
minDirtyMessages: Int = 0,
numThreads: Int = 1,
defaultPolicy: String = "dedupe",
defaultPolicy: String = "compact",
policyOverrides: Map[String, String] = Map()): LogCleaner = {
// create partitions and add them to the pool
@@ -101,7 +101,7 @@ class LogCleanerIntegrationTest extends JUnitSuite {
val dir = new File(logDir, "log-" + i)
dir.mkdirs()
val log = new Log(dir = dir,
LogConfig(segmentSize = segmentSize, maxIndexSize = 100*1024, fileDeleteDelayMs = deleteDelay, dedupe = true),
LogConfig(segmentSize = segmentSize, maxIndexSize = 100*1024, fileDeleteDelayMs = deleteDelay, compact = true),
recoveryPoint = 0L,
scheduler = time.scheduler,
time = time)

78
core/src/test/scala/unit/kafka/log/LogManagerTest.scala

@@ -201,6 +201,7 @@ class LogManagerTest extends JUnit3Suite {
/**
* Test that it is not possible to open two log managers using the same data directory
*/
@Test
def testTwoLogManagersUsingSameDirFails() {
try {
new LogManager(Array(logDir), Map(), logConfig, cleanerConfig, 1000L, 10000L, 1000L, time.scheduler, time)
@@ -209,24 +210,75 @@ class LogManagerTest extends JUnit3Suite {
case e: KafkaException => // this is good
}
}
/**
* Test that recovery points are correctly written out to disk
*/
@Test
def testCheckpointRecoveryPoints() {
val topicA = TopicAndPartition("test-a", 1)
val topicB = TopicAndPartition("test-b", 1)
val logA = this.logManager.createLog(topicA, logConfig)
val logB = this.logManager.createLog(topicB, logConfig)
for(i <- 0 until 50)
logA.append(TestUtils.singleMessageSet("test".getBytes()))
for(i <- 0 until 100)
logB.append(TestUtils.singleMessageSet("test".getBytes()))
logA.flush()
logB.flush()
verifyCheckpointRecovery(Seq(TopicAndPartition("test-a", 1), TopicAndPartition("test-b", 1)), logManager)
}
/**
* Test that recovery points directory checking works with trailing slash
*/
@Test
def testRecoveryDirectoryMappingWithTrailingSlash() {
logManager.shutdown()
logDir = TestUtils.tempDir()
logManager = new LogManager(logDirs = Array(new File(logDir.getAbsolutePath + File.separator)),
topicConfigs = Map(),
defaultConfig = logConfig,
cleanerConfig = cleanerConfig,
flushCheckMs = 1000L,
flushCheckpointMs = 100000L,
retentionCheckMs = 1000L,
scheduler = time.scheduler,
time = time)
logManager.startup
verifyCheckpointRecovery(Seq(TopicAndPartition("test-a", 1)), logManager)
}
/**
* Test that recovery points directory checking works with relative directory
*/
@Test
def testRecoveryDirectoryMappingWithRelativeDirectory() {
logManager.shutdown()
logDir = new File("data" + File.separator + logDir.getName)
logDir.mkdirs()
logDir.deleteOnExit()
logManager = new LogManager(logDirs = Array(logDir),
topicConfigs = Map(),
defaultConfig = logConfig,
cleanerConfig = cleanerConfig,
flushCheckMs = 1000L,
flushCheckpointMs = 100000L,
retentionCheckMs = 1000L,
scheduler = time.scheduler,
time = time)
logManager.startup
verifyCheckpointRecovery(Seq(TopicAndPartition("test-a", 1)), logManager)
}
private def verifyCheckpointRecovery(topicAndPartitions: Seq[TopicAndPartition],
logManager: LogManager) {
val logs = topicAndPartitions.map(this.logManager.createLog(_, logConfig))
logs.foreach(log => {
for(i <- 0 until 50)
log.append(TestUtils.singleMessageSet("test".getBytes()))
log.flush()
})
logManager.checkpointRecoveryPointOffsets()
val checkpoints = new OffsetCheckpoint(new File(logDir, logManager.RecoveryPointCheckpointFile)).read()
assertEquals("Recovery point should equal checkpoint", checkpoints(topicA), logA.recoveryPoint)
assertEquals("Recovery point should equal checkpoint", checkpoints(topicB), logB.recoveryPoint)
topicAndPartitions.zip(logs).foreach {
case(tp, log) => {
assertEquals("Recovery point should equal checkpoint", checkpoints(tp), log.recoveryPoint)
}
}
}
}
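A minimal sketch of what verifyCheckpointRecovery reads back: the checkpoint file maps each TopicAndPartition to its recovery point. The helper name and checkpointFileName parameter are illustrative; OffsetCheckpoint and its read() come from the hunk above, and kafka.server is assumed to be its package:
import java.io.File
import kafka.server.OffsetCheckpoint
// Illustrative helper: read the per-partition recovery points that
// LogManager.checkpointRecoveryPointOffsets() flushed to disk.
def readRecoveryPoints(logDir: File, checkpointFileName: String) =
  new OffsetCheckpoint(new File(logDir, checkpointFileName)).read()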

37
core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala

@@ -6,32 +6,35 @@
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
*/
package kafka.log4j
import java.util.Properties
import java.io.File
import kafka.consumer.SimpleConsumer
import kafka.server.{KafkaConfig, KafkaServer}
import kafka.utils.{TestUtils, Utils, Logging}
import junit.framework.Assert._
import kafka.api.FetchRequestBuilder
import kafka.producer.async.MissingConfigException
import kafka.serializer.Encoder
import kafka.zk.ZooKeeperTestHarness
import java.util.Properties
import java.io.File
import org.apache.log4j.spi.LoggingEvent
import org.apache.log4j.{PropertyConfigurator, Logger}
import org.junit.{After, Before, Test}
import org.scalatest.junit.JUnit3Suite
import junit.framework.Assert._
class KafkaLog4jAppenderTest extends JUnit3Suite with ZooKeeperTestHarness with Logging {
var logDirZk: File = null
@@ -72,8 +75,8 @@ class KafkaLog4jAppenderTest extends JUnit3Suite with ZooKeeperTestHarness with
var props = new Properties()
props.put("log4j.rootLogger", "INFO")
props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
props.put("log4j.appender.KAFKA.layout", "org.apache.log4j.PatternLayout")
props.put("log4j.appender.KAFKA.layout.ConversionPattern", "%-5p: %c - %m%n")
props.put("log4j.appender.KAFKA.Topic", "test-topic")
props.put("log4j.appender.KAFKA.SerializerClass", "kafka.log4j.AppenderStringEncoder")
props.put("log4j.logger.kafka.log4j", "INFO, KAFKA")
@@ -82,15 +85,15 @@ class KafkaLog4jAppenderTest extends JUnit3Suite with ZooKeeperTestHarness with
try {
PropertyConfigurator.configure(props)
fail("Missing properties exception was expected !")
}catch {
} catch {
case e: MissingConfigException =>
}
props = new Properties()
props.put("log4j.rootLogger", "INFO")
props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
props.put("log4j.appender.KAFKA.layout", "org.apache.log4j.PatternLayout")
props.put("log4j.appender.KAFKA.layout.ConversionPattern", "%-5p: %c - %m%n")
props.put("log4j.appender.KAFKA.Topic", "test-topic")
props.put("log4j.appender.KAFKA.SerializerClass", "kafka.log4j.AppenderStringEncoder")
props.put("log4j.logger.kafka.log4j", "INFO, KAFKA")
@@ -99,15 +102,15 @@ class KafkaLog4jAppenderTest extends JUnit3Suite with ZooKeeperTestHarness with
try {
PropertyConfigurator.configure(props)
fail("Missing properties exception was expected !")
}catch {
} catch {
case e: MissingConfigException =>
}
props = new Properties()
props.put("log4j.rootLogger", "INFO")
props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
props.put("log4j.appender.KAFKA.layout", "org.apache.log4j.PatternLayout")
props.put("log4j.appender.KAFKA.layout.ConversionPattern", "%-5p: %c - %m%n")
props.put("log4j.appender.KAFKA.SerializerClass", "kafka.log4j.AppenderStringEncoder")
props.put("log4j.appender.KAFKA.brokerList", TestUtils.getBrokerListStrFromConfigs(Seq(config)))
props.put("log4j.logger.kafka.log4j", "INFO, KAFKA")
@@ -116,15 +119,15 @@ class KafkaLog4jAppenderTest extends JUnit3Suite with ZooKeeperTestHarness with
try {
PropertyConfigurator.configure(props)
fail("Missing properties exception was expected !")
}catch {
} catch {
case e: MissingConfigException =>
}
props = new Properties()
props.put("log4j.rootLogger", "INFO")
props.put("log4j.appender.KAFKA", "kafka.producer.KafkaLog4jAppender")
props.put("log4j.appender.KAFKA.layout","org.apache.log4j.PatternLayout")
props.put("log4j.appender.KAFKA.layout.ConversionPattern","%-5p: %c - %m%n")
props.put("log4j.appender.KAFKA.layout", "org.apache.log4j.PatternLayout")
props.put("log4j.appender.KAFKA.layout.ConversionPattern", "%-5p: %c - %m%n")
props.put("log4j.appender.KAFKA.brokerList", TestUtils.getBrokerListStrFromConfigs(Seq(config)))
props.put("log4j.appender.KAFKA.Topic", "test-topic")
props.put("log4j.logger.kafka.log4j", "INFO, KAFKA")
@@ -132,7 +135,7 @@ class KafkaLog4jAppenderTest extends JUnit3Suite with ZooKeeperTestHarness with
// serializer missing
try {
PropertyConfigurator.configure(props)
}catch {
} catch {
case e: MissingConfigException => fail("should default to kafka.serializer.StringEncoder")
}
}

51
core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala

@@ -0,0 +1,51 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.server
import junit.framework.Assert._
import java.util.Properties
import java.io.File
import org.junit.{After, Before, Test}
import kafka.integration.KafkaServerTestHarness
import kafka.utils._
import kafka.common._
import kafka.log.LogConfig
import kafka.admin.AdminUtils
import org.scalatest.junit.JUnit3Suite
class DynamicConfigChangeTest extends JUnit3Suite with KafkaServerTestHarness {
override val configs = List(new KafkaConfig(TestUtils.createBrokerConfig(0, TestUtils.choosePort)))
@Test
def testConfigChange() {
val oldVal = 100000
val newVal = 200000
val tp = TopicAndPartition("test", 0)
AdminUtils.createTopic(zkClient, tp.topic, 1, 1, LogConfig(flushInterval = oldVal).toProps)
TestUtils.retry(10000) {
val logOpt = this.servers(0).logManager.getLog(tp)
assertTrue(logOpt.isDefined)
assertEquals(oldVal, logOpt.get.config.flushInterval)
}
AdminUtils.changeTopicConfig(zkClient, tp.topic, LogConfig(flushInterval = newVal).toProps)
TestUtils.retry(10000) {
assertEquals(newVal, this.servers(0).logManager.getLog(tp).get.config.flushInterval)
}
}
}
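A minimal sketch of the runtime config change the new test covers (the helper is hypothetical; AdminUtils.changeTopicConfig and LogConfig(...).toProps are the calls used in the test, and the broker applies the new value without a restart):
import kafka.admin.AdminUtils
import kafka.log.LogConfig
import org.I0Itec.zkclient.ZkClient
// Hypothetical helper: push a new per-topic flush interval through ZooKeeper.
def changeFlushInterval(zkClient: ZkClient, topic: String, newInterval: Int): Unit =
  AdminUtils.changeTopicConfig(zkClient, topic, LogConfig(flushInterval = newInterval).toProps)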

2
core/src/test/scala/unit/kafka/server/HighwatermarkPersistenceTest.scala

@@ -143,7 +143,7 @@ class HighwatermarkPersistenceTest extends JUnit3Suite {
}
def hwmFor(replicaManager: ReplicaManager, topic: String, partition: Int): Long = {
replicaManager.highWatermarkCheckpoints(replicaManager.config.logDirs(0)).read.getOrElse(TopicAndPartition(topic, partition), 0L)
replicaManager.highWatermarkCheckpoints(new File(replicaManager.config.logDirs(0)).getAbsolutePath).read.getOrElse(TopicAndPartition(topic, partition), 0L)
}
}
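For context on why the lookup key changed: the high-watermark checkpoint map is now looked up by the absolute form of each log.dirs entry, so a trailing slash or a relative path in the config resolves to the same key. A plain java.io.File sketch (helper name is illustrative):
import java.io.File
// "data" and "data/" normalize to the same abstract path, so both yield the
// same absolute-path key; relative dirs resolve against the working directory.
def logDirKey(configuredDir: String): String =
  new File(configuredDir).getAbsolutePath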

75
core/src/test/scala/unit/kafka/server/ReplicaManagerTest.scala

@@ -0,0 +1,75 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package kafka.server
import kafka.utils.{MockScheduler, MockTime, TestUtils}
import kafka.log.{CleanerConfig, LogManager, LogConfig}
import java.util.concurrent.atomic.AtomicBoolean
import java.io.File
import org.easymock.EasyMock
import org.I0Itec.zkclient.ZkClient
import org.scalatest.junit.JUnit3Suite
import org.junit.Test
class ReplicaManagerTest extends JUnit3Suite {
val topic = "test-topic"
@Test
def testHighWaterMarkDirectoryMapping() {
val props = TestUtils.createBrokerConfig(1)
val config = new KafkaConfig(props)
val zkClient = EasyMock.createMock(classOf[ZkClient])
val mockLogMgr = createLogManager(config.logDirs.map(new File(_)).toArray)
val time: MockTime = new MockTime()
val rm = new ReplicaManager(config, time, zkClient, new MockScheduler(time), mockLogMgr, new AtomicBoolean(false))
val partition = rm.getOrCreatePartition(topic, 1, 1)
partition.getOrCreateReplica(1)
rm.checkpointHighWatermarks()
}
@Test
def testHighwaterMarkRelativeDirectoryMapping() {
val props = TestUtils.createBrokerConfig(1)
props.put("log.dir", TestUtils.tempRelativeDir("data").getAbsolutePath)
val config = new KafkaConfig(props)
val zkClient = EasyMock.createMock(classOf[ZkClient])
val mockLogMgr = createLogManager(config.logDirs.map(new File(_)).toArray)
val time: MockTime = new MockTime()
val rm = new ReplicaManager(config, time, zkClient, new MockScheduler(time), mockLogMgr, new AtomicBoolean(false))
val partition = rm.getOrCreatePartition(topic, 1, 1)
partition.getOrCreateReplica(1)
rm.checkpointHighWatermarks()
}
private def createLogManager(logDirs: Array[File]): LogManager = {
val time = new MockTime()
return new LogManager(logDirs,
topicConfigs = Map(),
defaultConfig = new LogConfig(),
cleanerConfig = CleanerConfig(enableCleaner = false),
flushCheckMs = 1000L,
flushCheckpointMs = 100000L,
retentionCheckMs = 1000L,
scheduler = time.scheduler,
time = time)
}
}

20
core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala

@@ -96,5 +96,25 @@ class ServerShutdownTest extends JUnit3Suite with ZooKeeperTestHarness {
producer.close()
server.shutdown()
Utils.rm(server.config.logDirs)
verifyNonDaemonThreadsStatus
}
@Test
def testCleanShutdownWithDeleteTopicEnabled() {
val newProps = TestUtils.createBrokerConfig(0, port)
newProps.setProperty("delete.topic.enable", "true")
val newConfig = new KafkaConfig(newProps)
var server = new KafkaServer(newConfig)
server.startup()
server.shutdown()
server.awaitShutdown()
Utils.rm(server.config.logDirs)
verifyNonDaemonThreadsStatus
}
def verifyNonDaemonThreadsStatus() {
assertEquals(0, Thread.getAllStackTraces.keySet().toArray
.map(_.asInstanceOf[Thread])
.count(t => !t.isDaemon && t.isAlive && t.getClass.getCanonicalName.toLowerCase.startsWith("kafka")))
}
}

4
core/src/test/scala/unit/kafka/server/SimpleFetchTest.scala

@@ -93,7 +93,7 @@ class SimpleFetchTest extends JUnit3Suite {
val requestChannel = new RequestChannel(2, 5)
val apis = new KafkaApis(requestChannel, replicaManager, zkClient, configs.head.brokerId, configs.head, controller)
val partitionStateInfo = EasyMock.createNiceMock(classOf[PartitionStateInfo])
apis.metadataCache.put(TopicAndPartition(topic, partitionId), partitionStateInfo)
apis.metadataCache.addPartitionInfo(topic, partitionId, partitionStateInfo)
EasyMock.replay(partitionStateInfo)
// This request (from a follower) wants to read up to 2*HW but should only get back up to HW bytes into the log
val goodFetch = new FetchRequestBuilder()
@@ -164,7 +164,7 @@ class SimpleFetchTest extends JUnit3Suite {
val requestChannel = new RequestChannel(2, 5)
val apis = new KafkaApis(requestChannel, replicaManager, zkClient, configs.head.brokerId, configs.head, controller)
val partitionStateInfo = EasyMock.createNiceMock(classOf[PartitionStateInfo])
apis.metadataCache.put(TopicAndPartition(topic, partitionId), partitionStateInfo)
apis.metadataCache.addPartitionInfo(topic, partitionId, partitionStateInfo)
EasyMock.replay(partitionStateInfo)
/**

12
core/src/test/scala/unit/kafka/utils/TestUtils.scala

@@ -84,6 +84,16 @@ object TestUtils extends Logging {
f
}
/**
* Create a temporary relative directory
*/
def tempRelativeDir(parent: String): File = {
val f = new File(parent, "kafka-" + random.nextInt(1000000))
f.mkdirs()
f.deleteOnExit()
f
}
/**
* Create a temporary file
*/
@@ -513,7 +523,7 @@ object TestUtils extends Logging {
def waitUntilMetadataIsPropagated(servers: Seq[KafkaServer], topic: String, partition: Int, timeout: Long) = {
Assert.assertTrue("Partition [%s,%d] metadata not propagated after timeout".format(topic, partition),
TestUtils.waitUntilTrue(() =>
servers.foldLeft(true)(_ && _.apis.metadataCache.keySet.contains(TopicAndPartition(topic, partition))), timeout))
servers.foldLeft(true)(_ && _.apis.metadataCache.containsTopicAndPartition(topic, partition)), timeout))
}
def writeNonsenseToFile(fileName: File, position: Long, size: Int) {

3
examples/build.sbt

@@ -1,3 +0,0 @@
name := "kafka-java-examples"
crossPaths := false

9
gradle.properties

@@ -14,11 +14,10 @@
# limitations under the License.
group=org.apache.kafka
version=0.8.1
version=0.8.1.1-SNAPSHOT
scalaVersion=2.8.0
task=build
#mavenUrl=file://localhost/tmp/maven
mavenUrl=http://your.maven.repository
mavenUsername=your.username
mavenPassword=your.password
mavenUrl=
mavenUsername=
mavenPassword=

BIN
lib/sbt-launch.jar

Binary file not shown.

1
perf/build.sbt

@@ -1 +0,0 @@
name := "kafka-perf"

152
project/Build.scala

@@ -1,152 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import sbt._
import Keys._
import Process._
import scala.xml.{Node, Elem}
import scala.xml.transform.{RewriteRule, RuleTransformer}
object KafkaBuild extends Build {
val buildNumber = SettingKey[String]("build-number", "Build number defaults to $BUILD_NUMBER environment variable")
val releaseName = SettingKey[String]("release-name", "the full name of this release")
val commonSettings = Seq(
organization := "org.apache.kafka",
pomExtra :=
<parent>
<groupId>org.apache</groupId>
<artifactId>apache</artifactId>
<version>10</version>
</parent>
<licenses>
<license>
<name>Apache 2</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>,
scalacOptions ++= Seq("-deprecation", "-unchecked", "-g:none"),
crossScalaVersions := Seq("2.8.0","2.8.2", "2.9.1", "2.9.2", "2.10.1"),
excludeFilter in unmanagedSources <<= scalaVersion(v => if (v.startsWith("2.8")) "*_2.9+.scala" else "*_2.8.scala"),
scalaVersion := "2.8.0",
version := "0.8.1",
publishTo := Some("Apache Maven Repo" at "https://repository.apache.org/service/local/staging/deploy/maven2"),
credentials += Credentials(Path.userHome / ".m2" / ".credentials"),
buildNumber := System.getProperty("build.number", ""),
version <<= (buildNumber, version) { (build, version) => if (build == "") version else version + "+" + build},
releaseName <<= (name, version, scalaVersion) {(name, version, scalaVersion) => name + "_" + scalaVersion + "-" + version},
javacOptions in compile ++= Seq("-Xlint:unchecked", "-source", "1.5"),
javacOptions in doc ++= Seq("-source", "1.5"),
parallelExecution in Test := false, // Prevent tests from overrunning each other
publishArtifact in Test := true,
libraryDependencies ++= Seq(
"log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms"),
"net.sf.jopt-simple" % "jopt-simple" % "3.2",
"org.slf4j" % "slf4j-simple" % "1.6.4"
),
// The issue is going from log4j 1.2.14 to 1.2.15, the developers added some features which required
// some dependencies on various sun and javax packages.
ivyXML := <dependencies>
<exclude module="javax"/>
<exclude module="jmxri"/>
<exclude module="jmxtools"/>
<exclude module="mail"/>
<exclude module="jms"/>
<dependency org="org.apache.zookeeper" name="zookeeper" rev="3.3.4">
<exclude org="log4j" module="log4j"/>
<exclude org="jline" module="jline"/>
</dependency>
</dependencies>,
mappings in packageBin in Compile += file("LICENSE") -> "LICENSE",
mappings in packageBin in Compile += file("NOTICE") -> "NOTICE"
)
val hadoopSettings = Seq(
javacOptions in compile ++= Seq("-Xlint:deprecation"),
libraryDependencies ++= Seq(
"org.apache.avro" % "avro" % "1.4.0",
"org.apache.pig" % "pig" % "0.8.0",
"commons-logging" % "commons-logging" % "1.0.4",
"org.codehaus.jackson" % "jackson-core-asl" % "1.5.5",
"org.codehaus.jackson" % "jackson-mapper-asl" % "1.5.5",
"org.apache.hadoop" % "hadoop-core" % "0.20.2"
),
ivyXML :=
<dependencies>
<exclude module="netty"/>
<exclude module="javax"/>
<exclude module="jmxri"/>
<exclude module="jmxtools"/>
<exclude module="mail"/>
<exclude module="jms"/>
<dependency org="org.apache.hadoop" name="hadoop-core" rev="0.20.2">
<exclude org="junit" module="junit"/>
</dependency>
<dependency org="org.apache.pig" name="pig" rev="0.8.0">
<exclude org="junit" module="junit"/>
</dependency>
</dependencies>
)
val runRat = TaskKey[Unit]("run-rat-task", "Runs Apache rat on Kafka")
val runRatTask = runRat := {
"bin/run-rat.sh" !
}
val release = TaskKey[Unit]("release", "Creates a deployable release directory file with dependencies, config, and scripts.")
val releaseTask = release <<= ( packageBin in (core, Compile), dependencyClasspath in (core, Runtime), exportedProducts in Compile,
target, releaseName in core ) map { (packageBin, deps, products, target, releaseName) =>
val jarFiles = deps.files.filter(f => !products.files.contains(f) && f.getName.endsWith(".jar"))
val destination = target / "RELEASE" / releaseName
IO.copyFile(packageBin, destination / packageBin.getName)
IO.copyFile(file("LICENSE"), destination / "LICENSE")
IO.copyFile(file("NOTICE"), destination / "NOTICE")
IO.copy(jarFiles.map { f => (f, destination / "libs" / f.getName) })
IO.copyDirectory(file("config"), destination / "config")
IO.copyDirectory(file("bin"), destination / "bin")
for {file <- (destination / "bin").listFiles} { file.setExecutable(true, true) }
}
val releaseZip = TaskKey[Unit]("release-zip", "Creates a deployable zip file with dependencies, config, and scripts.")
val releaseZipTask = releaseZip <<= (release, target, releaseName in core) map { (release, target, releaseName) =>
val zipPath = target / "RELEASE" / "%s.zip".format(releaseName)
IO.delete(zipPath)
IO.zip((target/"RELEASE" ** releaseName ***) x relativeTo(target/"RELEASE"), zipPath)
}
val releaseTar = TaskKey[Unit]("release-tar", "Creates a deployable tar.gz file with dependencies, config, and scripts.")
val releaseTarTask = releaseTar <<= ( release, target, releaseName in core) map { (release, target, releaseName) =>
Process(Seq("tar", "czf", "%s.tar.gz".format(releaseName), releaseName), target / "RELEASE").! match {
case 0 => ()
case n => sys.error("Failed to run native tar application!")
}
}
lazy val kafka = Project(id = "Kafka", base = file(".")).aggregate(core, examples, contrib, perf).settings((commonSettings ++
runRatTask ++ releaseTask ++ releaseZipTask ++ releaseTarTask): _*)
lazy val core = Project(id = "core", base = file("core")).settings(commonSettings: _*)
lazy val examples = Project(id = "java-examples", base = file("examples")).settings(commonSettings :_*) dependsOn (core)
lazy val perf = Project(id = "perf", base = file("perf")).settings((Seq(name := "kafka-perf") ++ commonSettings):_*) dependsOn (core)
lazy val contrib = Project(id = "contrib", base = file("contrib")).aggregate(hadoopProducer, hadoopConsumer).settings(commonSettings :_*)
lazy val hadoopProducer = Project(id = "hadoop-producer", base = file("contrib/hadoop-producer")).settings(hadoopSettings ++ commonSettings: _*) dependsOn (core)
lazy val hadoopConsumer = Project(id = "hadoop-consumer", base = file("contrib/hadoop-consumer")).settings(hadoopSettings ++ commonSettings: _*) dependsOn (core)
lazy val clients = Project(id = "kafka-clients", base = file("clients"))
}

17
project/build.properties

@@ -1,17 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#Project properties
#Mon Feb 28 11:55:49 PST 2011
sbt.version=0.12.1

251
project/build/KafkaProject.scala

@@ -1,251 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import sbt._
import scala.xml.{Node, Elem}
import scala.xml.transform.{RewriteRule, RuleTransformer}
class KafkaProject(info: ProjectInfo) extends ParentProject(info) with IdeaProject {
override def managedStyle = ManagedStyle.Maven
val publishTo = "Maven Repo" at "http://maven/content/repositories/repository.snapshots"
Credentials(Path.userHome / ".m2" / ".credentials", log)
lazy val core = project("core", "core-kafka", new CoreKafkaProject(_))
lazy val examples = project("examples", "java-examples", new KafkaExamplesProject(_), core)
lazy val contrib = project("contrib", "contrib", new ContribProject(_))
lazy val perf = project("perf", "perf", new KafkaPerfProject(_))
lazy val releaseZipTask = core.packageDistTask
val releaseZipDescription = "Compiles every sub project, runs unit tests, creates a deployable release zip file with dependencies, config, and scripts."
lazy val releaseZip = releaseZipTask dependsOn(core.corePackageAction, core.test, examples.examplesPackageAction,
contrib.producerPackageAction, contrib.consumerPackageAction) describedAs releaseZipDescription
val runRatDescription = "Runs Apache rat on Kafka"
lazy val runRatTask = task {
Runtime.getRuntime().exec("bin/run-rat.sh")
None
} describedAs runRatDescription
val rat = "org.apache.rat" % "apache-rat" % "0.8"
class CoreKafkaProject(info: ProjectInfo) extends DefaultProject(info)
with IdeaProject with CoreDependencies with TestDependencies with CompressionDependencies {
val corePackageAction = packageAllAction
//The issue is going from log4j 1.2.14 to 1.2.15, the developers added some features which required
// some dependencies on various sun and javax packages.
override def ivyXML =
<dependencies>
<exclude module="javax"/>
<exclude module="jmxri"/>
<exclude module="jmxtools"/>
<exclude module="mail"/>
<exclude module="jms"/>
<dependency org="org.apache.zookeeper" name="zookeeper" rev="3.3.4">
<exclude module="log4j"/>
<exclude module="jline"/>
</dependency>
</dependencies>
override def organization = "org.apache"
override def filterScalaJars = false
// build the executable jar's classpath.
// (why is it necessary to explicitly remove the target/{classes,resources} paths? hm.)
def dependentJars = {
val jars =
publicClasspath +++ mainDependencies.scalaJars --- mainCompilePath --- mainResourcesOutputPath
if (jars.get.find { jar => jar.name.startsWith("scala-library-") }.isDefined) {
// workaround bug in sbt: if the compiler is explicitly included, don't include 2 versions
// of the library.
jars --- jars.filter { jar =>
jar.absolutePath.contains("/boot/") && jar.name == "scala-library.jar"
}
} else {
jars
}
}
def dependentJarNames = dependentJars.getFiles.map(_.getName).filter(_.endsWith(".jar"))
override def manifestClassPath = Some(dependentJarNames.map { "libs/" + _ }.mkString(" "))
def distName = (artifactID + "-" + projectVersion.value)
def distPath = "dist" / distName ##
def configPath = "config" ##
def configOutputPath = distPath / "config"
def binPath = "bin" ##
def binOutputPath = distPath / "bin"
def distZipName = {
"%s-%s.zip".format(artifactID, projectVersion.value)
}
lazy val packageDistTask = task {
distPath.asFile.mkdirs()
(distPath / "libs").asFile.mkdirs()
binOutputPath.asFile.mkdirs()
configOutputPath.asFile.mkdirs()
FileUtilities.copyFlat(List(jarPath), distPath, log).left.toOption orElse
FileUtilities.copyFlat(dependentJars.get, distPath / "libs", log).left.toOption orElse
FileUtilities.copy((configPath ***).get, configOutputPath, log).left.toOption orElse
FileUtilities.copy((binPath ***).get, binOutputPath, log).left.toOption orElse
FileUtilities.zip((("dist" / distName) ##).get, "dist" / distZipName, true, log)
None
}
val PackageDistDescription = "Creates a deployable zip file with dependencies, config, and scripts."
lazy val packageDist = packageDistTask dependsOn(`package`, `test`) describedAs PackageDistDescription
val cleanDist = cleanTask("dist" ##) describedAs("Erase any packaged distributions.")
override def cleanAction = super.cleanAction dependsOn(cleanDist)
override def javaCompileOptions = super.javaCompileOptions ++
List(JavaCompileOption("-source"), JavaCompileOption("1.5"))
override def packageAction = super.packageAction dependsOn (testCompileAction, packageTestAction)
}
class KafkaPerfProject(info: ProjectInfo) extends DefaultProject(info)
with IdeaProject
with CoreDependencies {
val perfPackageAction = packageAllAction
val dependsOnCore = core
//The issue is going from log4j 1.2.14 to 1.2.15, the developers added some features which required
// some dependencies on various sun and javax packages.
override def ivyXML =
<dependencies>
<exclude module="javax"/>
<exclude module="jmxri"/>
<exclude module="jmxtools"/>
<exclude module="mail"/>
<exclude module="jms"/>
</dependencies>
override def artifactID = "kafka-perf"
override def filterScalaJars = false
override def javaCompileOptions = super.javaCompileOptions ++
List(JavaCompileOption("-Xlint:unchecked"))
}
class KafkaExamplesProject(info: ProjectInfo) extends DefaultProject(info)
with IdeaProject
with CoreDependencies {
val examplesPackageAction = packageAllAction
val dependsOnCore = core
//The issue is going from log4j 1.2.14 to 1.2.15, the developers added some features which required
// some dependencies on various sun and javax packages.
override def ivyXML =
<dependencies>
<exclude module="javax"/>
<exclude module="jmxri"/>
<exclude module="jmxtools"/>
<exclude module="mail"/>
<exclude module="jms"/>
</dependencies>
override def artifactID = "kafka-java-examples"
override def filterScalaJars = false
override def javaCompileOptions = super.javaCompileOptions ++
List(JavaCompileOption("-Xlint:unchecked"))
}
class ContribProject(info: ProjectInfo) extends ParentProject(info) with IdeaProject {
lazy val hadoopProducer = project("hadoop-producer", "hadoop producer",
new HadoopProducerProject(_), core)
lazy val hadoopConsumer = project("hadoop-consumer", "hadoop consumer",
new HadoopConsumerProject(_), core)
val producerPackageAction = hadoopProducer.producerPackageAction
val consumerPackageAction = hadoopConsumer.consumerPackageAction
class HadoopProducerProject(info: ProjectInfo) extends DefaultProject(info)
with IdeaProject
with CoreDependencies with HadoopDependencies {
val producerPackageAction = packageAllAction
override def ivyXML =
<dependencies>
<exclude module="netty"/>
<exclude module="javax"/>
<exclude module="jmxri"/>
<exclude module="jmxtools"/>
<exclude module="mail"/>
<exclude module="jms"/>
<dependency org="org.apache.hadoop" name="hadoop-core" rev="0.20.2">
<exclude module="junit"/>
</dependency>
<dependency org="org.apache.pig" name="pig" rev="0.10.0">
<exclude module="junit"/>
</dependency>
</dependencies>
}
class HadoopConsumerProject(info: ProjectInfo) extends DefaultProject(info)
with IdeaProject
with CoreDependencies {
val consumerPackageAction = packageAllAction
override def ivyXML =
<dependencies>
<exclude module="netty"/>
<exclude module="javax"/>
<exclude module="jmxri"/>
<exclude module="jmxtools"/>
<exclude module="mail"/>
<exclude module="jms"/>
<exclude module=""/>
<dependency org="org.apache.hadoop" name="hadoop-core" rev="0.20.2">
<exclude module="junit"/>
</dependency>
<dependency org="org.apache.pig" name="pig" rev="0.8.0">
<exclude module="junit"/>
</dependency>
</dependencies>
val jodaTime = "joda-time" % "joda-time" % "1.6"
}
}
trait TestDependencies {
val easymock = "org.easymock" % "easymock" % "3.0" % "test"
val junit = "junit" % "junit" % "4.1" % "test"
val scalaTest = "org.scalatest" % "scalatest" % "1.2" % "test"
}
trait CoreDependencies {
val log4j = "log4j" % "log4j" % "1.2.15"
val jopt = "net.sf.jopt-simple" % "jopt-simple" % "3.2"
val slf4jSimple = "org.slf4j" % "slf4j-simple" % "1.6.4"
}
trait HadoopDependencies {
val avro = "org.apache.avro" % "avro" % "1.4.0"
val commonsLogging = "commons-logging" % "commons-logging" % "1.0.4"
val jacksonCore = "org.codehaus.jackson" % "jackson-core-asl" % "1.5.5"
val jacksonMapper = "org.codehaus.jackson" % "jackson-mapper-asl" % "1.5.5"
val hadoop = "org.apache.hadoop" % "hadoop-core" % "0.20.2"
}
trait CompressionDependencies {
val snappy = "org.xerial.snappy" % "snappy-java" % "1.0.5"
}
}

9
project/plugins.sbt

@@ -1,9 +0,0 @@
resolvers += Resolver.url("artifactory", url("http://scalasbt.artifactoryonline.com/scalasbt/sbt-plugin-releases"))(Resolver.ivyStylePatterns)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.8.8")
addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.2.0")
resolvers += Resolver.url("sbt-plugin-releases", new URL("http://scalasbt.artifactoryonline.com/scalasbt/sbt-plugin-releases/"))(Resolver.ivyStylePatterns)
addSbtPlugin("com.jsuereth" % "xsbt-gpg-plugin" % "0.6")

16
sbt

@@ -1,16 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
java -Xmx1024M -XX:MaxPermSize=512m -Dbuild.number="$BUILD_NUMBER" -jar `dirname $0`/lib/sbt-launch.jar "$@"

17
sbt.bat

@@ -1,17 +0,0 @@
@echo off
rem Licensed to the Apache Software Foundation (ASF) under one or more
rem contributor license agreements. See the NOTICE file distributed with
rem this work for additional information regarding copyright ownership.
rem The ASF licenses this file to You under the Apache License, Version 2.0
rem (the "License"); you may not use this file except in compliance with
rem the License. You may obtain a copy of the License at
rem
rem http://www.apache.org/licenses/LICENSE-2.0
rem
rem Unless required by applicable law or agreed to in writing, software
rem distributed under the License is distributed on an "AS IS" BASIS,
rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
rem See the License for the specific language governing permissions and
rem limitations under the License.
java -Xmx1024M -XX:MaxPermSize=512m -jar lib\sbt-launch.jar "%1"

5
scala.gradle

@@ -0,0 +1,5 @@
if (!hasProperty('scalaVersion')) {
ext.scalaVersion = '2.8.0'
}
ext.defaultScalaVersion = '2.8.0'
ext.baseScalaVersion = (scalaVersion.startsWith('2.10')) ? '2.10' : scalaVersion

1
settings.gradle

@@ -13,4 +13,5 @@
// See the License for the specific language governing permissions and
// limitations under the License.
apply from: file('scala.gradle')
include 'core', 'perf', 'contrib:hadoop-consumer', 'contrib:hadoop-producer', 'examples', 'clients'
