Documentation
Kafka 0.10.0 Documentation
Prior releases: 0.7.x, 0.8.0, 0.8.1.X, 0.8.2.X, 0.9.0.X.
1. Getting Started
1.1 Introduction
1.2 Use Cases
1.3 Quick Start
1.4 Ecosystem
1.5 Upgrading From Previous Versions
2. APIs
2.1 Producer API
2.2 Consumer API
2.3 Streams API
2.4 Connect API
2.5 Legacy APIs
3. Configuration
3.1 Broker Configs
3.2 Producer Configs
3.3 Consumer Configs
3.3.1 New Consumer Configs
3.3.2 Old Consumer Configs
3.4 Kafka Connect Configs
3.5 Kafka Streams Configs
4. Design
4.1 Motivation
4.2 Persistence
4.3 Efficiency
4.4 The Producer
4.5 The Consumer
4.6 Message Delivery Semantics
4.7 Replication
4.8 Log Compaction
4.9 Quotas
5. Implementation
5.1 API Design
5.2 Network Layer
5.3 Messages
5.4 Message Format
5.5 Log
5.6 Distribution
6. Operations
6.1 Basic Kafka Operations
Adding and removing topics
Modifying topics
Graceful shutdown
Balancing leadership
Checking consumer position
Mirroring data between clusters
Expanding your cluster
Decommissioning brokers
Increasing replication factor
6.2 Datacenters
6.3 Important Configs
Important Client Configs
A Production Server Config
6.4 Java Version
6.5 Hardware and OS
OS
Disks and Filesystems
Application vs OS Flush Management
Linux Flush Behavior
Ext4 Notes
6.6 Monitoring
6.7 ZooKeeper
Stable Version
Operationalization
7. Security
7.1 Security Overview
7.2 Encryption and Authentication using SSL
7.3 Authentication using SASL
7.4 Authorization and ACLs
7.5 Incorporating Security Features in a Running Cluster
7.6 ZooKeeper Authentication
New Clusters
Migrating Clusters
Migrating the ZooKeeper Ensemble
8. Kafka Connect
8.1 Overview
8.2 User Guide
8.3 Connector Development Guide
9. Kafka Streams
9.1 Overview
9.2 Developer Guide
Core Concepts
Low-Level Processor API
High-Level Streams DSL