Build a kafka cluster with SSL using docker / docker-compose and a Java client

lff l · May 11, 2020

Requirements:

  1. A kafka cluster with 3 brokers
  2. The communication between 3 brokers should be encrypted (SSL)
  3. The communication between brokers and Java Client should be encrypted.

After some testing, I finally got it working. The following doc online gave me great help:

  1. https://www.cnblogs.com/huxi2b/p/7427815.html

NOTE: This setup should be used only for evaluation. Do not use it in PROD; SSL communication between kafka instances performs poorly.

Software needed:

  1. openssl (From CentOS)
  2. keytool (From jdk)
  3. docker/docker-compose

System layout: on a Linux machine (192.168.100.129) the following containers are running (Zookeepers, kafka brokers). Port 9091 on the Linux host is mapped to port 9092 of kafka broker 1, port 9092 is mapped to port 9092 of kafka broker 2, and port 9093 is mapped to port 9092 of kafka broker 3.

So client will connect to this cluster as

192.168.100.129:9091,192.168.100.129:9092,192.168.100.129:9093

Step.1 Prepare the kafka container.

Because the default kafka image does not support setting the SSL parameters via ENV variables, we must build our own docker image for later use.

Dockerfile:

Note: the password Aa123456! is set for the keystore/truststore. We will use the same password later when creating the keystore/truststore.
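A minimal sketch of such a Dockerfile is shown below. The base image and the server.properties path are assumptions (a wurstmeister/kafka-style image is used here for illustration); adapt them, and the password, to your own setup.

# Sketch only: base image and config path are assumptions, adjust to your environment.
FROM wurstmeister/kafka:2.12-2.4.1

# Bake the SSL keystore/truststore settings into server.properties, since the
# stock image does not expose them as ENV variables. The store files themselves
# are mounted into /certificates at runtime (see the volumes section in Step 4).
RUN { \
      echo ""; \
      echo "ssl.keystore.location=/certificates/kafka.keystore"; \
      echo "ssl.keystore.password=Aa123456!"; \
      echo "ssl.key.password=Aa123456!"; \
      echo "ssl.truststore.location=/certificates/kafka.truststore"; \
      echo "ssl.truststore.password=Aa123456!"; \
    } >> /opt/kafka/config/server.properties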

Then run docker build . -t my_kafka:latest to build the new docker image. This image (my_kafka:latest) will be used later.

Step.2 Create a docker-compose.yml file and add zookeeper support

Public docker-hub zookeeper images can be used.
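A sketch of the zookeeper part of docker-compose.yml is shown below. It follows the replicated-mode example of the official zookeeper image; the image tag and the ZOO_SERVERS format (zookeeper 3.5+) are assumptions to adjust.

version: '2'
services:
  zoo1:
    image: zookeeper:3.5
    restart: always
    hostname: zoo1
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo2:
    image: zookeeper:3.5
    restart: always
    hostname: zoo2
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo3:
    image: zookeeper:3.5
    restart: always
    hostname: zoo3
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181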

Step.3 Create the keystore, truststore, CA, and Certificate Signing Request files, and sign the certificates of the 3 brokers with the CA.

First, we need to modify the openssl configuration file (openssl.cnf) to enable SAN names and add our broker names and host IP.

At the end of the file, the 3 broker names (broker1, broker2, broker3) and the public IP address (192.168.100.129) are set in the [ alt_names ] section. Make sure to update the IP address if yours is different.
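The relevant additions might look like the following sketch; section names follow the stock CentOS openssl.cnf, where req_extensions is normally commented out and needs to be enabled.

[ req ]
req_extensions = v3_req        # enable so CSRs are signed with the v3_req extensions

[ v3_req ]
subjectAltName = @alt_names    # attach the SAN entries below

[ alt_names ]
DNS.1 = broker1
DNS.2 = broker2
DNS.3 = broker3
IP.1  = 192.168.100.129        # change to your own public IP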

Second, run the following bash script to generate the keystore and truststore (a sketch is given after these notes).

1. This script file needs to be placed in the same directory as the openssl.cnf from the step above;

2. keytool and openssl must be on PATH

Remember to change the CLUSTER_IP and PASSWORD if your IP is different.
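A sketch of such a script is shown below. The exact commands in the original may differ; the variable names (BASE_DIR, CLUSTER_IP, PASSWORD) and the output file names follow the listing shown after this step, and the validity period is an arbitrary choice.

#!/bin/bash
# Sketch of the certificate-generation flow; the original script's exact commands may differ.
set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"   # openssl.cnf must sit next to this script
BASE_DIR=/tmp/kafka
CERT_DIR="$BASE_DIR/certificates"
CLUSTER_IP=192.168.100.129                    # change to your public IP
PASSWORD='Aa123456!'                          # must match the password baked into the image in Step 1
VALIDITY=3650                                 # validity in days

mkdir -p "$CERT_DIR"
cd "$CERT_DIR"

# 1. Create the CA key and self-signed CA certificate (ca-key, ca-cert)
openssl req -new -x509 -keyout ca-key -out ca-cert -days $VALIDITY \
    -passout pass:"$PASSWORD" -subj "/CN=kafka-ca"

# 2. Import the CA certificate into the shared truststore and keystore
keytool -keystore kafka.truststore -alias CARoot -import -file ca-cert \
    -storepass "$PASSWORD" -noprompt
keytool -keystore kafka.keystore -alias CARoot -import -file ca-cert \
    -storepass "$PASSWORD" -noprompt

for i in 1 2 3; do
    BROKER=broker$i

    # 3. Generate a key pair for each broker (single keystore, one alias per broker)
    keytool -keystore kafka.keystore -alias "$BROKER" -validity $VALIDITY \
        -genkey -keyalg RSA -storepass "$PASSWORD" -keypass "$PASSWORD" \
        -dname "CN=$BROKER"

    # 4. Export a certificate signing request (kafka-cert_N)
    keytool -keystore kafka.keystore -alias "$BROKER" -certreq \
        -file "kafka-cert_$i" -storepass "$PASSWORD"

    # 5. Sign it with the CA, taking the SAN entries from openssl.cnf (kafka-cert_N-signed)
    openssl x509 -req -CA ca-cert -CAkey ca-key -in "kafka-cert_$i" \
        -out "kafka-cert_${i}-signed" -days $VALIDITY -CAcreateserial \
        -passin pass:"$PASSWORD" -extensions v3_req -extfile "$SCRIPT_DIR/openssl.cnf"

    # 6. Import the signed certificate back into the keystore under the broker alias
    keytool -keystore kafka.keystore -alias "$BROKER" -import \
        -file "kafka-cert_${i}-signed" -storepass "$PASSWORD" -noprompt
done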

Run the script, and if everything is fine, you will get the following files under BASE_DIR/certificates (by default /tmp/kafka/certificates):

-rw-r--r-- 1 root root 1.3K May  7 10:30 ca-cert
-rw-r--r-- 1 root root 41 May 7 10:30 ca-cert.srl
-rw------- 1 root root 1.9K May 7 10:30 ca-key
-rw-r--r-- 1 root root 1.2K May 7 10:30 kafka-cert_1
-rw-r--r-- 1 root root 1.3K May 7 10:30 kafka-cert_1-signed
-rw-r--r-- 1 root root 1.2K May 7 10:30 kafka-cert_2
-rw-r--r-- 1 root root 1.3K May 7 10:30 kafka-cert_2-signed
-rw-r--r-- 1 root root 1.2K May 7 10:30 kafka-cert_3
-rw-r--r-- 1 root root 1.3K May 7 10:30 kafka-cert_3-signed
-rw-r--r-- 1 root root 11K May 7 10:30 kafka.keystore
-rw-r--r-- 1 root root 982 May 7 10:30 kafka.truststore

Only kafka.keystore and kafka.truststore are needed later. The other files (especially ca-key) should be kept secret.

Step.4, Add kafka service configurations to the docker-compose.yml in Step.2

For the broker1 configuration:

KAFKA_ADVERTISED_HOST_NAME: 192.168.100.129
KAFKA_ADVERTISED_PORT: 9091
KAFKA_HOST_NAME: broker1
KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
KAFKA_LISTENERS: SSL://broker1:9092
KAFKA_ADVERTISED_LISTENERS: SSL://192.168.100.129:9091
KAFKA_HEAP_OPTS: "-Xmx256M -Xms128M"
KAFKA_INTER_BROKER_LISTENER_NAME: SSL

Change all occurrences of the IP address 192.168.100.129 to your public IP, if needed. There is no need to change broker1 because it's an internal hostname that can be resolved by docker-compose.

And in each volumes section, the truststore / keystore files are mounted into the /certificates folder of the kafka container. This is because in Step 1 we put the keystore/truststore settings in server.properties (ssl.keystore.location, ssl.truststore.location). You also need to change the location if BASE_DIR/certificates in the script was updated.

volumes:
  - /tmp/kafka/certificates/kafka.truststore:/certificates/kafka.truststore
  - /tmp/kafka/certificates/kafka.keystore:/certificates/kafka.keystore
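Putting the Step 1 image, the environment above, and the volume mounts together, the broker1 service might look like the sketch below (broker2 and broker3 are analogous, publishing host ports 9092 and 9093 and advertising those ports instead):

  broker1:
    image: my_kafka:latest
    hostname: broker1
    ports:
      - "9091:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.100.129
      KAFKA_ADVERTISED_PORT: 9091
      KAFKA_HOST_NAME: broker1
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_LISTENERS: SSL://broker1:9092
      KAFKA_ADVERTISED_LISTENERS: SSL://192.168.100.129:9091
      KAFKA_HEAP_OPTS: "-Xmx256M -Xms128M"
      KAFKA_INTER_BROKER_LISTENER_NAME: SSL
    volumes:
      - /tmp/kafka/certificates/kafka.truststore:/certificates/kafka.truststore
      - /tmp/kafka/certificates/kafka.keystore:/certificates/kafka.keystore
    depends_on:
      - zoo1
      - zoo2
      - zoo3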

Step.5, start the services

Run docker-compose up -d in the same directory as the docker-compose.yml to start the zookeepers and kafka brokers. If everything is fine, we will get a kafka cluster with SSL enabled.
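As a quick, optional sanity check, you can ask one of the brokers for its certificate chain; the CA file path below assumes the default BASE_DIR from Step 3.

# Should print the broker certificate and end with "Verify return code: 0 (ok)"
openssl s_client -connect 192.168.100.129:9091 -CAfile /tmp/kafka/certificates/ca-cert </dev/null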

Step.6, test with Client (Java).

Before running, you need to copy the keystore/truststore to the machine on which the Java classes will run.

Producer Class:

Please pay attention to the password and keystore/truststore settings, and update them if needed.
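A minimal producer along these lines might look like the sketch below, assuming the kafka-clients 2.x API; the class name, topic name, and store paths are placeholders to adjust.

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.100.129:9091,192.168.100.129:9092,192.168.100.129:9093");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // SSL settings: point to the truststore/keystore copied from the broker host.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/kafka.truststore");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "Aa123456!");
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/kafka.keystore");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "Aa123456!");
        props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "Aa123456!");
        // Only needed if the certificate SANs do not cover the address the client connects to.
        props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>("test-topic", "key-" + i, "value-" + i));
            }
            producer.flush();
        }
    }
}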

Consumer Class (also change the password and keystore/truststore settings if needed):
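A matching consumer sketch, under the same assumptions:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SslConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.100.129:9091,192.168.100.129:9092,192.168.100.129:9093");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ssl-demo-group");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Same SSL settings as the producer.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/kafka.truststore");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "Aa123456!");
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/kafka.keystore");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "Aa123456!");
        props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "Aa123456!");
        props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            // Poll a few times and print whatever arrives.
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}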
