Building a Kafka Cluster with Docker
Basic concepts in Kafka
broker: a message-middleware processing node; one broker is one Kafka node, and one or more brokers make up a Kafka cluster
topic: Kafka categorizes messages by topic; every message published to the Kafka cluster must specify a topic
producer: the message producer, a client that sends messages to a broker
consumer: the message consumer, a client that reads messages from a broker
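To make these concepts concrete, here is a minimal sketch using the console tools shipped with Kafka, run inside the kafka1 container that is built later in this post (the /opt/kafka/bin path, the topic name test and the broker address 10.0.0.95:9092 are assumptions):

# enter the kafka1 container (assumes the cluster below is already running)
docker exec -it kafka1 bash
# create a topic named "test" with 3 partitions, replicated across the 3 brokers
/opt/kafka/bin/kafka-topics.sh --create --topic test --partitions 3 --replication-factor 3 --zookeeper 10.0.0.95:2181
# producer: type a line and press Enter to send, Ctrl+C to quit
/opt/kafka/bin/kafka-console-producer.sh --broker-list 10.0.0.95:9092 --topic test
# consumer: read the topic from the beginning
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.0.0.95:9092 --topic test --from-beginning

On newer Kafka versions kafka-topics.sh takes --bootstrap-server instead of --zookeeper.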
Kafka characteristics
The producer sends messages to a broker, and the broker persists them in log files on local disk.
Messages are stored in order, and that order is described by an offset.
When a consumer consumes messages, it also uses an offset to describe the position of the message it is about to consume.
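One way to look at these offsets is the GetOffsetShell tool bundled with Kafka; a small sketch, assuming a topic named test and the same container paths as above:

# print the latest (log-end) offset of every partition of topic "test"
/opt/kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.0.0.95:9092 --topic test --time -1
# output is one topic:partition:offset triple per line, e.g. test:0:42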
Message delivery: unicast and multicast
Unicast: if multiple consumers are in the same consumer group, only one of them receives a given message from the subscribed topic; in other words, within one consumer group each message of a topic is delivered to exactly one consumer.
Multicast: different consumer groups subscribe to the same topic; within each group only one consumer receives the message, but since every group gets a copy, in effect multiple consumers across the groups receive it.
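A sketch of the difference using the console consumer (the group names testGroup1 and testGroup2 are assumptions):

# terminal 1 and terminal 2: same group, so each message is delivered to only ONE of them
kafka-console-consumer.sh --bootstrap-server 10.0.0.95:9092 --topic test --group testGroup1
kafka-console-consumer.sh --bootstrap-server 10.0.0.95:9092 --topic test --group testGroup1
# terminal 3: a different group, so it also receives every message (multicast across groups)
kafka-console-consumer.sh --bootstrap-server 10.0.0.95:9092 --topic test --group testGroup2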
The Kafka message backlog problem
How a backlog arises: the consumers' consumption rate falls far behind the producers' production rate, so a large amount of data in Kafka is never consumed. As the unconsumed messages pile up, the consumers' seek performance degrades, the performance of the whole Kafka service degrades with it, the services that depend on it slow down, and eventually this can cascade into a service avalanche.
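A backlog shows up as the LAG column reported by the consumer-groups tool; a quick check, assuming a group named testGroup1:

# per partition: committed offset, log-end offset and the difference between them (LAG)
kafka-consumer-groups.sh --bootstrap-server 10.0.0.95:9092 --describe --group testGroup1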
Ways to deal with a backlog:
Use multiple threads inside the consumer so that the machine's resources are fully used for consuming messages;
Add more consumers to the consumer group (up to the number of partitions) and deploy them on other machines, so that they consume in parallel and raise the overall consumption rate;
Create a consumer whose only job is to poll messages and forward them to a new topic in Kafka that has many partitions, with each partition consumed by its own consumer; this fan-out approach is rarely used (see the sketch after this list).
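A minimal sketch of the re-partitioning idea behind the last two options: give the (new) topic more partitions so that more consumers in the same group can share the work (the topic name test_wide and the partition count 12 are assumptions):

# a wider topic: more partitions allow more consumers in one group to consume in parallel
kafka-topics.sh --create --topic test_wide --partitions 12 --replication-factor 3 --zookeeper 10.0.0.95:2181
# start several consumers in the SAME group, one per terminal or machine;
# Kafka assigns each of them a subset of the 12 partitions
kafka-console-consumer.sh --bootstrap-server 10.0.0.95:9092 --topic test_wide --group testGroup1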
Building the Kafka cluster with Docker
一、Pull the images
docker search kafka
docker pull wurstmeister/kafka
docker pull wurstmeister/zookeeper
二、docker-compose.yml configuration
kafka1 node configuration:
version: '3.9'
services:
  zoo1:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo1/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo1/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=10.0.0.95:2888:3888;2181 server.2=10.0.0.187:2888:3888;2181 server.3=10.0.0.115:2888:3888;2181
    network_mode: host
  kafka1:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka1
    container_name: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.0.95
      KAFKA_HOST_NAME: 10.0.0.95
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 10.0.0.95:2181,10.0.0.187:2181,10.0.0.115:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.0.0.95:9092
      KAFKA_LISTENERS: PLAINTEXT://10.0.0.95:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka1/logs:/kafka:Z
    network_mode: host
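A small convenience sketch before starting this node, assuming the bind-mount directories do not exist yet on the host (Docker would also create them on demand):

mkdir -p /data/wangzunbin/volume/zkcluster/zoo1/data
mkdir -p /data/wangzunbin/volume/zkcluster/zoo1/datalog
mkdir -p /data/wangzunbin/volume/kfkluster/kafka1/logs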
kafka2 node configuration:
version: '3.1'
services:
  zoo2:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo2/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo2/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=10.0.0.95:2888:3888;2181 server.2=10.0.0.187:2888:3888;2181 server.3=10.0.0.115:2888:3888;2181
    network_mode: host
  kafka2:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka2
    container_name: kafka2
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.0.187
      KAFKA_HOST_NAME: 10.0.0.187
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 10.0.0.95:2181,10.0.0.187:2181,10.0.0.115:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.0.0.187:9092
      KAFKA_LISTENERS: PLAINTEXT://10.0.0.187:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka2/logs:/kafka:Z
    network_mode: host
kafka3 node configuration:
version: '3.1'
services:
  zoo3:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /data/wangzunbin/volume/zkcluster/zoo3/data:/data:Z
      - /data/wangzunbin/volume/zkcluster/zoo3/datalog:/datalog:Z
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=10.0.0.95:2888:3888;2181 server.2=10.0.0.187:2888:3888;2181 server.3=10.0.0.115:2888:3888;2181
    network_mode: host
  kafka3:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka3
    container_name: kafka3
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 10.0.0.115
      KAFKA_HOST_NAME: 10.0.0.115
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 10.0.0.95:2181,10.0.0.187:2181,10.0.0.115:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.0.0.115:9092
      KAFKA_LISTENERS: PLAINTEXT://10.0.0.115:9092
    volumes:
      - /data/wangzunbin/volume/kfkluster/kafka3/logs:/kafka:Z
    network_mode: host
kafka-manager node configuration:
version: '3.1'
services:
  kafka-manager:
    image: sheepkiller/kafka-manager
    restart: always
    hostname: kafka-manager
    container_name: kafka-manager
    ports:
      - 9000:9000
    environment:
      ZK_HOSTS: 10.0.0.95:2181,10.0.0.187:2181,10.0.0.115:2181
      KAFKA_BROKERS: 10.0.0.95:9092,10.0.0.187:9092,10.0.0.115:9092
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    network_mode: host
三、Once the configuration is in place, run docker compose up -d on each node to start everything; the nodes find each other automatically through ZooKeeper.
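A quick way to verify that all three brokers have joined, sketched with the tools inside the kafka1 container (the /opt/kafka/bin path and the topic name test are assumptions):

# list the broker ids registered in ZooKeeper -- a healthy cluster shows [1, 2, 3]
docker exec -it kafka1 /opt/kafka/bin/zookeeper-shell.sh 10.0.0.95:2181 ls /brokers/ids
# describe a replicated topic and check the Leader / Replicas / Isr columns
docker exec -it kafka1 /opt/kafka/bin/kafka-topics.sh --describe --topic test --zookeeper 10.0.0.95:2181

Kafka Manager, once its container is up, is reachable on port 9000 of its host.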