
Deploying a Kafka Cluster on Multiple Servers with docker-compose


  • Kafka is an open-source distributed event streaming platform. It depends on either ZooKeeper or KRaft for metadata management; this article uses ZooKeeper.

Server IP configuration

This article uses three servers to build the cluster; their IPs are as follows:

nodeName    IP
node1       10.10.210.96
node2       10.10.210.97
node3       10.10.210.98

Deploying ZooKeeper

  • The working directory is /home/zookeeper.

node1 configuration

Directory structure

- zookeeper
  - config
    - zoo.cfg
  - docker-compose.yml
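
The directory and the two files can be prepared up front; a minimal sketch for node1 (paths assumed from the working directory above, adjust as needed):

mkdir -p /home/zookeeper/config
cd /home/zookeeper
# Put the contents shown below into these two files
touch config/zoo.cfg docker-compose.yml
# ./data and ./datalog are created automatically by Docker when the bind mounts are first used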

zoo.cfg

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
clientPort=2181
server.1=0.0.0.0:2888:3888
server.2=10.10.210.97:2888:3888
server.3=10.10.210.98:2888:3888

docker-compose.yml

version: '3'
services:
  zookeeper:
    image: zookeeper:3.7.0
    restart: always
    hostname: zookeeper-node-1
    container_name: zookeeper
    ports:
    - 2181:2181   # client port
    - 2888:2888   # quorum peer port
    - 3888:3888   # leader election port
    - 8080:8080   # AdminServer HTTP port
    volumes:
    - ./data:/data
    - ./datalog:/datalog
    - ./config/zoo.cfg:/conf/zoo.cfg
    environment:
      ZOO_MY_ID: 1

node2 configuration

Directory structure

- zookeeper
  - config
    - zoo.cfg
  - docker-compose.yml

zoo.cfg

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
clientPort=2181
server.1=10.10.210.96:2888:3888
server.2=0.0.0.0:2888:3888
server.3=10.10.210.98:2888:3888

docker-compose.yml

version: '3'
services:
  zookeeper:
    image: zookeeper:3.7.0
    restart: always
    hostname: zookeeper-node-2
    container_name: zookeeper
    ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
    - 8080:8080
    volumes:
    - ./data:/data
    - ./datalog:/datalog
    - ./config/zoo.cfg:/conf/zoo.cfg
    environment:
      ZOO_MY_ID: 2

node3 configuration

Directory structure

- zookeeper
  - config
    - zoo.cfg
  - docker-compose.yml

zoo.cfg

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
clientPort=2181
server.1=10.10.210.96:2888:3888
server.2=10.10.210.97:2888:3888
server.3=0.0.0.0:2888:3888

docker-compose.yml

version: '3'
services:
  zookeeper:
    image: zookeeper:3.7.0
    restart: always
    hostname: zookeeper-node-3
    container_name: zookeeper
    ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
    - 8080:8080
    volumes:
    - ./data:/data
    - ./datalog:/datalog
    - ./config/zoo.cfg:/conf/zoo.cfg
    environment:
      ZOO_MY_ID: 3

  • On each server, run docker-compose up -d in /home/zookeeper to start the three ZooKeeper services, and watch the startup logs with docker-compose logs -f.
  • ZOO_MY_ID is the ZooKeeper server id and must be different on each server. It corresponds to the server.N entries in zoo.cfg: the N in server.1 is the ZOO_MY_ID of that node.
  • For details on the zoo.cfg settings, see the ZooKeeper Administrator's Guide.
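
Once all three containers are running, the ensemble can be verified; a minimal sketch, assuming the container name zookeeper from the compose files above:

# Run on each node; one server should report "Mode: leader" and the other two "Mode: follower"
docker exec zookeeper zkServer.sh status

# The AdminServer mapped on port 8080 also exposes basic statistics
curl http://localhost:8080/commands/stat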

Deploying Kafka

  • The working directory is /home/kafka.

node1 configuration

Directory structure

- kafka
  - docker-compose.yml
  - config/server.properties

docker-compose.yml

version: '3'
services:
  kafka:
    image: bitnami/kafka:3.0.0
    restart: always
    hostname: kafka-node-1
    container_name: kafka
    ports:
    - 9092:9092
    - 9999:9999
    volumes:
    - ./logs:/opt/bitnami/kafka/logs
    - ./data:/bitnami/kafka/data
    - ./config/server.properties:/opt/bitnami/kafka/config/server.properties

server.properties

# broker.id must be unique for each broker in the cluster
broker.id=1
listeners=PLAINTEXT://:9092
# advertised.listeners must be the address clients use to reach this node
advertised.listeners=PLAINTEXT://10.10.210.96:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/bitnami/kafka/data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.10.210.96:2181,10.10.210.97:2181,10.10.210.98:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable=true
max.partition.fetch.bytes=1048576
max.request.size=1048576
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=

node2 configuration

Directory structure

- kafka
  - docker-compose.yml
  - config/server.properties

docker-compose.yml

version: '3'
services:
  kafka:
    image: bitnami/kafka:3.0.0
    restart: always
    hostname: kafka-node-2
    container_name: kafka
    ports:
    - 9092:9092
    - 9999:9999
    volumes:
    - ./logs:/opt/bitnami/kafka/logs
    - ./data:/bitnami/kafka/data
    - ./config/server.properties:/opt/bitnami/kafka/config/server.properties

server.properties

broker.id=2
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://10.10.210.97:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/bitnami/kafka/data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.10.210.96:2181,10.10.210.97:2181,10.10.210.98:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable=true
max.partition.fetch.bytes=1048576
max.request.size=1048576
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=

node3 configuration

Directory structure

- kafka
  - docker-compose.yml
  - config/server.properties

docker-compose.yml

version: '3'
services:
  kafka:
    image: bitnami/kafka:3.0.0
    restart: always
    hostname: kafka-node-3
    container_name: kafka
    ports:
    - 9092:9092
    - 9999:9999
    volumes:
    - ./logs:/opt/bitnami/kafka/logs
    - ./data:/bitnami/kafka/data
    - ./config/server.properties:/opt/bitnami/kafka/config/server.properties

server.properties

broker.id=3
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://10.10.210.98:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/bitnami/kafka/data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.10.210.96:2181,10.10.210.97:2181,10.10.210.98:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable=true
max.partition.fetch.bytes=1048576
max.request.size=1048576
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=

  • On each server, run docker-compose up -d in /home/kafka to start the three Kafka services, and watch the startup logs with docker-compose logs -f.
  • For details on the server.properties settings, see the Kafka Broker Configs documentation.
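
Before testing with a client, it is worth confirming that all three brokers have registered in ZooKeeper; a small sketch, run on any ZooKeeper node (container name zookeeper assumed from the compose files above):

# A healthy cluster should list the three broker ids: [1, 2, 3]
docker exec zookeeper zkCli.sh -server localhost:2181 ls /brokers/ids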

Testing Kafka

  • Use Offset Explorer to check that a client can connect to the Kafka cluster.
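
If no GUI client is at hand, the command-line tools bundled with the bitnami image can serve as a quick smoke test; a rough sketch (the topic name test-topic is just an example; the scripts should be on the PATH of the bitnami image, under /opt/bitnami/kafka/bin):

# Create a topic replicated across all three brokers
docker exec kafka kafka-topics.sh --create --topic test-topic \
  --partitions 3 --replication-factor 3 \
  --bootstrap-server 10.10.210.96:9092,10.10.210.97:9092,10.10.210.98:9092

# Produce a few messages on node1 (type lines, then Ctrl+C)
docker exec -it kafka kafka-console-producer.sh --topic test-topic \
  --bootstrap-server 10.10.210.96:9092

# Consume them from node2 to confirm replication and connectivity
docker exec -it kafka kafka-console-consumer.sh --topic test-topic \
  --from-beginning --bootstrap-server 10.10.210.97:9092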

Postscript

  • If you only need a simple configuration, Kafka can also be started purely through environment variables instead of a mounted server.properties; for example (when used for a multi-node cluster, each node still has to advertise its own address):

docker-compose.yml

version: '3'
services:
  kafka:
    image: bitnami/kafka:3.0.0
    restart: always
    hostname: kafka-node
    container_name: kafka
    ports:
    - 9092:9092
    - 9999:9999
    environment:
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.10.210.96:9092
      - KAFKA_ADVERTISED_HOST_NAME=10.10.210.96
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=10.10.210.96:2181,10.10.210.97:2181,10.10.210.98:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - JMX_PORT=9999 
    volumes:
    - ./logs:/opt/bitnami/kafka/logs
    - ./data:/bitnami/kafka/data

From: https://www.cnblogs.com/chinalxx/p/17496277.html
