Just another Apache Kafka docker image
Running it using its internal zookeeper server
docker pull christiangda/kafka
docker run --tty --interactive --rm --name kafka \
--publish 9092:9092 \
--publish 2181:2181 \
christiangda/kafka WITH_INTERNAL_ZOOKEEPER bin/kafka-server-start.sh config/server.properties
The special first argument WITH_INTERNAL_ZOOKEEPER is necessary to start the internal ZooKeeper server. This configuration is not recommended for production environments!
For production environments, you could use ZooKeeper's OFFICIAL REPOSITORY.
For advanced configurations, continue reading the documentation!
Work in Progress (WIP)!
This is a Docker image available in different Java, Scala, and Kafka versions, so you can select the flavor that best fits your environment.
There are many other good Apache Kafka Docker images around, but this one lets you use the Apache Kafka examples in the same way you see them on the project's page.
Depending on the Java, Scala, and Kafka versions, you have many options to choose from.
The most important feature of this Docker image is that you can pass any configuration parameter using a special environment variable notation.
For example, if you want to modify or pass the configuration parameters broker.id and compression.type, you only need to run your image like this:
docker run --tty --interactive --rm --name kafka \
--publish 9092:9092 \
--publish 2181:2181 \
--env SERVER__BROKER_ID=1 \
--env SERVER__COMPRESSION_TYPE=gzip \
christiangda/kafka WITH_INTERNAL_ZOOKEEPER bin/kafka-server-start.sh config/server.properties
Additionally, if you have an external ZooKeeper server called zk-01, then you also need to pass zookeeper.connect:
docker run --tty --interactive --rm \
--name kafka-01 \
--publish 9092:9092 \
--env SERVER__ZOOKEEPER_CONNECT=zk-01 \
--env SERVER__BROKER_ID=1 \
--env SERVER__COMPRESSION_TYPE=gzip \
--link zk-01 \
christiangda/kafka bin/kafka-server-start.sh config/server.properties
The provisioning script is in charge of converting:
| --env VAR | Converted to | inside config/server.properties |
|---|---|---|
| SERVER__ZOOKEEPER_CONNECT=zk-01 | --> | zookeeper.connect=zk-01 |
| SERVER__BROKER_ID=1 | --> | broker.id=1 |
| SERVER__COMPRESSION_TYPE=gzip | --> | compression.type=gzip |
| … | --> | … |
| SERVER__SOME_KEY=some value | --> | some.key=some value |
Basically, the rule is: take the configuration parameter name, replace . with _, put all letters in UPPERCASE, and add the prefix SERVER__.
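As an illustrative sketch (not necessarily how the image's provisioning script is actually implemented), the conversion for each SERVER__* variable looks roughly like this:
# Hypothetical sketch of the SERVER__* -> server.properties conversion;
# the real provisioning script inside the image may differ.
env | grep '^SERVER__' | while IFS='=' read -r name value; do
  key=$(echo "${name#SERVER__}" | tr '[:upper:]' '[:lower:]' | tr '_' '.')
  echo "${key}=${value}"   # e.g. SERVER__BROKER_ID=1 becomes broker.id=1
done >> config/server.properties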
By default, Kafka exports KAFKA_HEAP_OPTS="-Xmx1G -Xms1G". If you want to change it, pass it to Docker as an environment variable:
--env KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
If you want to change the extra JVM arguments (for example, to enable remote JMX), pass EXTRA_ARGS in the same way:
--env EXTRA_ARGS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=9999"
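For example, a run combining both variables could look like the following; the published port 9999 simply matches the JMX RMI port used in the flags above, and all values are only illustrative:
# Illustrative only: 2 GB heap plus the JMX flags shown above
docker run --tty --interactive --rm --name kafka \
--publish 9092:9092 \
--publish 2181:2181 \
--publish 9999:9999 \
--env KAFKA_HEAP_OPTS="-Xmx2G -Xms2G" \
--env EXTRA_ARGS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=9999" \
christiangda/kafka WITH_INTERNAL_ZOOKEEPER bin/kafka-server-start.sh config/server.properties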
Download the image
docker pull christiangda/kafka
docker run --tty --interactive --rm \
--name kafka \
--publish 9092:9092 \
--publish 2181:2181 \
christiangda/kafka WITH_INTERNAL_ZOOKEEPER bin/kafka-server-start.sh config/server.properties
See the docker compose file here
See the procedure with Docker machine here
Using the internal zookeeper daemon
docker run --tty --interactive --rm \
--name kafka \
--publish 9092:9092 \
--publish 2181:2181 \
christiangda/kafka WITH_INTERNAL_ZOOKEEPER bin/kafka-server-start.sh config/server.properties
docker run --tty --interactive --rm \
--name client \
--link kafka \
christiangda/kafka bin/kafka-topics.sh --create --zookeeper kafka:2181 --replication-factor 1 --partitions 1 --topic test
docker run --tty --interactive --rm \
--name client \
--link kafka \
christiangda/kafka bin/kafka-topics.sh --list --zookeeper kafka:2181
docker run --tty --interactive --rm \
--name client \
--link kafka \
christiangda/kafka bin/kafka-console-producer.sh --broker-list kafka:9092 --topic test
docker run --tty --interactive --rm \
--name client2 \
--link kafka \
christiangda/kafka bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic test --from-beginning
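As a quick end-to-end check, you can also pipe a message into the producer non-interactively; this sketch assumes a POSIX shell (sh) is available inside the image:
# Illustrative: send a single test message without an interactive terminal
docker run --interactive --rm \
--name client3 \
--link kafka \
christiangda/kafka sh -c 'echo "hello kafka" | bin/kafka-console-producer.sh --broker-list kafka:9092 --topic test'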
Multi-Broker
docker run --tty --interactive --rm \
--name zk-01 \
--publish 2181:2181 \
zookeeper
docker run --tty --interactive --rm \
--name kafka-01 \
--publish 9092:9092 \
--env SERVER__ZOOKEEPER_CONNECT=zk-01 \
--env SERVER__BROKER_ID=1 \
--link zk-01 \
christiangda/kafka bin/kafka-server-start.sh config/server.properties
docker run --tty --interactive --rm \
--name kafka-02 \
--publish 9093:9092 \
--env SERVER__ZOOKEEPER_CONNECT=zk-01 \
--env SERVER__BROKER_ID=2 \
--link zk-01 \
--link kafka-01 \
christiangda/kafka bin/kafka-server-start.sh config/server.properties
docker run --tty --interactive --rm \
--name kafka-03 \
--publish 9094:9092 \
--env SERVER__ZOOKEEPER_CONNECT=zk-01 \
--env SERVER__BROKER_ID=3 \
--link zk-01 \
--link kafka-01 \
--link kafka-02 \
christiangda/kafka bin/kafka-server-start.sh config/server.properties
docker run --tty --interactive --rm \
--name client \
--link zk-01 \
christiangda/kafka bin/kafka-topics.sh --create --zookeeper zk-01:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
docker run --tty --interactive --rm \
--name client \
--link zk-01 \
christiangda/kafka bin/kafka-topics.sh --describe --zookeeper zk-01:2181 --topic my-replicated-topic
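To exercise the replicated topic, you can reuse the producer and consumer commands shown earlier, pointed at one of the brokers; the client container names here are only illustrative:
# Illustrative: publish to the replicated topic through broker kafka-01
docker run --tty --interactive --rm \
--name producer \
--link kafka-01 \
christiangda/kafka bin/kafka-console-producer.sh --broker-list kafka-01:9092 --topic my-replicated-topic
# Illustrative: consume the replicated topic from the beginning
docker run --tty --interactive --rm \
--name consumer \
--link kafka-01 \
christiangda/kafka bin/kafka-console-consumer.sh --bootstrap-server kafka-01:9092 --topic my-replicated-topic --from-beginning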
Default configuration
docker run -t -i --rm -e "SERVER__LOG_RETENTION_BYTES=20148" christiangda/kafka cat config/server.properties
docker run -t -i --rm -p 9092:9092 christiangda/kafka bin/kafka-server-start.sh config/server.properties
When you want to check whether your environment variable ends up in the config file:
docker run -t -i --rm -p 9092:9092 -e "SERVER__LOG_RETENTION_BYTES=20148" christiangda/kafka cat config/server.properties
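If you only want to see the resulting line, and assuming grep is available inside the image, something like this works too:
docker run -t -i --rm -e "SERVER__LOG_RETENTION_BYTES=20148" christiangda/kafka grep log.retention.bytes config/server.properties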
docker network create kafka-net
docker run --rm \
--tty \
--interactive \
--hostname kafka-01 \
--publish 2181:2181 \
--publish 9092:9092 \
christiangda/kafka WITH_INTERNAL_ZOOKEEPER bin/kafka-server-start.sh config/server.properties
docker run --rm \
--tty \
--interactive \
--hostname kafka-01 \
--publish 9092:9092 \
--env "SERVER__ZOOKEEPER_CONNECT=127.0.0.1:2181" \
christiangda/kafka bin/kafka-server-start.sh config/server.properties
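Note that the kafka-net network created above is not referenced by the two commands as written; as an illustrative alternative (container names and broker id are assumptions), both containers could join that network and reach each other by name:
# Illustrative: first container runs Kafka plus the internal ZooKeeper on kafka-net
docker run --rm --tty --interactive \
--name kafka-01 \
--network kafka-net \
--publish 2181:2181 \
--publish 9092:9092 \
christiangda/kafka WITH_INTERNAL_ZOOKEEPER bin/kafka-server-start.sh config/server.properties
# Illustrative: second broker on the same network, pointing at the first container's ZooKeeper
docker run --rm --tty --interactive \
--name kafka-02 \
--network kafka-net \
--publish 9093:9092 \
--env "SERVER__ZOOKEEPER_CONNECT=kafka-01:2181" \
--env "SERVER__BROKER_ID=2" \
christiangda/kafka bin/kafka-server-start.sh config/server.properties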
Using docker machine+swarm
docker stack deploy -c docker-compose.yml kafka-cluster
docker stack ps kafka-cluster
docker stack rm kafka-cluster
If you want to contribute to this project, please visit my GitHub repo and fork it, then make your own changes and prepare a pull request.
To build this image, you can execute the following commands:
git clone https://github.com/christiangda/docker-kafka
cd docker-kafka/
docker build --no-cache --rm --tag <your name>/kafka .
The parameterized procedure is:
git clone https://github.com/christiangda/docker-kafka
cd docker-kafka/
docker build --no-cache --rm \
--build-arg CONTAINER_JAVA_VERSION=8 \
--build-arg SCALA_VERSION=2.11 \
--build-arg KAFKA_VERSION=0.11.0.1 \
--tag <your name>/kafka:2.11-0.11.0.1 .
If you want to build Kafka version >= 1.0.0, I recommend using Java version 9:
docker build --no-cache --rm \
--build-arg CONTAINER_JAVA_VERSION=9 \
--build-arg SCALA_VERSION=2.11 \
--build-arg KAFKA_VERSION=1.0.1 \
--tag <your name>/kafka:2.11-1.0.1 \
--tag <your name>/kafka:latest .
This project is released under the Apache License, Version 2.0: