
Deploying Kafka with SASL authentication using docker-compose

Date: 2023-10-26 09:55:54
Tags: compose, log, zookeeper, appender, KAFKA, SASL, kafka, log4j

Preface

Test server: 10.255.60.149

I. Writing the docker-compose file

1. docker-compose.yml

version: '3.8'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    volumes:
       - /data/zookeeper/data:/data
       - /home/docker-compose/kafka/config:/opt/zookeeper-3.4.13/conf/
       - /home/docker-compose/kafka/config:/opt/zookeeper-3.4.13/secrets/ 
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      SERVER_JVMFLAGS: -Djava.security.auth.login.config=/opt/zookeeper-3.4.13/secrets/server_jaas.conf
    ports:
      - 12181:2181
    restart: always
  kafka_node1:
    image: wurstmeister/kafka
    container_name: kafka_node1
    depends_on:
      - zookeeper
    ports: 
      - 9092:9092
    volumes:
      - /home/docker-compose/kafka/data:/kafka
      - /home/docker-compose/kafka/config:/opt/kafka/secrets/
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://10.255.60.149:9092
      KAFKA_ADVERTISED_PORT: 9092 
      KAFKA_LISTENERS: SASL_PLAINTEXT://0.0.0.0:9092
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_PORT: 9092 
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
      KAFKA_SUPER_USERS: User:admin
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true" # "true" makes the ACL mechanism a blacklist: only blacklisted users are denied access; the default "false" makes it a whitelist: only whitelisted users may access
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_HEAP_OPTS: "-Xmx512M -Xms16M"
      KAFKA_OPTS: -Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf
    restart: always
 ## kafdrop: a web UI for monitoring Kafka
  kafdrop:
    image: obsidiandynamics/kafdrop
    restart: always
    ports:
       - "19001:9000"
    environment:
       KAFKA_BROKERCONNECT: "kafka_node1:9092"
    ## with SASL enabled on the broker, the SASL connection settings below are required; the value is the base64-encoded result
       KAFKA_PROPERTIES: c2FzbC5tZWNoYW5pc206IFBMQUlOCiAgICAgIHNlY3VyaXR5LnByb3RvY29sOiBTQVNMX1BMQUlOVEVYVAogICAgICBzYXNsLmphYXMuY29uZmlnOiBvcmcuYXBhY2hlLmthZmthLmNvbW1vbi5zZWN1cml0eS5zY3JhbS5TY3JhbUxvZ2luTW9kdWxlIHJlcXVpcmVkIHVzZXJuYW1lPSdhZG1pbicgcGFzc3dvcmQ9J2pkeXgjcXdlMTInOw==
    depends_on:
      - zookeeper
      - kafka_node1
    cpus: '1'
    mem_limit: 1024m
    container_name: kafdrop

Place this file in the /home/docker-compose directory. The directory can be customized, but remember to adjust the volume mount paths in the yml file accordingly.

Base64-decoding the KAFKA_PROPERTIES value yields the following (note that it references ScramLoginModule even though the mechanism is PLAIN; if kafdrop fails to authenticate, PlainLoginModule is the module that matches the PLAIN mechanism):

sasl.mechanism: PLAIN
      security.protocol: SASL_PLAINTEXT
      sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username='admin' password='jdyx#qwe12';

To base64-encode the contents of a file:

base64 file
或者
cat file | base64
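The encoding can be sketched end to end; a minimal round trip, assuming GNU coreutils base64 (the -w 0 flag disables line wrapping so the output fits in a single YAML value):

```shell
# Round-trip sketch: encode a small properties snippet for KAFKA_PROPERTIES
# and decode it again to verify. Assumes GNU coreutils base64; -w 0 turns off
# line wrapping so the result is one YAML-friendly line.
cat > kafka.properties <<'EOF'
sasl.mechanism: PLAIN
security.protocol: SASL_PLAINTEXT
EOF
encoded=$(base64 -w 0 kafka.properties)
echo "$encoded"

# decode to confirm the round trip
echo "$encoded" | base64 -d > roundtrip.properties
```

On systems without GNU coreutils (e.g. macOS), plain `base64 file` works but may wrap long output across lines, which would corrupt the YAML value.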

II. Adding the configuration files

Create the directory /home/docker-compose/kafka/config on the host, then add the configuration files below.

log4j.properties and configuration.xsl can be copied out of the zookeeper container with docker cp; their contents are unchanged.

1. server_jaas.conf
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="jdyx#qwe12";
}; 
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="jdyx#qwe12"
    // entries of the form user_<username>="<password>" define accounts and passwords (JAAS files accept // comments, not #)
    user_super="jdyx#qwe12"
    user_admin="jdyx#qwe12";
};
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="jdyx#qwe12"  
  // entries of the form user_<username>="<password>" define accounts and passwords
    user_admin="jdyx#qwe12";
};

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="jdyx#qwe12";
};
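The sections above are located through the JVM property java.security.auth.login.config; the compose file passes it to the broker via KAFKA_OPTS (and to zookeeper via SERVER_JVMFLAGS). A minimal sketch of the same wiring for a CLI client run by hand inside the container (the path matches the /opt/kafka/secrets volume mount):

```shell
# Sketch: Kafka CLI tools run inside kafka_node1 would pick up the KafkaClient
# section of server_jaas.conf through the same JVM property the broker uses.
# The path matches the volume mount in docker-compose.yml.
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf"
echo "$KAFKA_OPTS"
```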
2. zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/zookeeper-3.4.13/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1

## key settings to enable SASL
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

requireClientAuthScheme=sasl

jaasLoginRenew=3600000
zookeeper.sasl.client=true
3. configuration.xsl
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="html"/>
<xsl:template match="configuration">
<html>
<body>
<table border="1">
<tr>
 <td>name</td>
 <td>value</td>
 <td>description</td>
</tr>
<xsl:for-each select="property">
<tr>
  <td><a name="{name}"><xsl:value-of select="name"/></a></td>
  <td><xsl:value-of select="value"/></td>
  <td><xsl:value-of select="description"/></td>
</tr>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
4. log4j.properties
# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE
zookeeper.console.threshold=INFO
zookeeper.log.dir=.
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=DEBUG
zookeeper.tracelog.dir=.
zookeeper.tracelog.file=zookeeper_trace.log

#
# ZooKeeper Logging Configuration
#

# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=${zookeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
log4j.appender.ROLLINGFILE.MaxBackupIndex=10

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n


#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}

log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

III. Deployment and testing

Run docker-compose up -d. Under normal circumstances the containers start successfully; if not, check the container logs to troubleshoot.

Create kafkacat.conf:

security.protocol=SASL_PLAINTEXT
sasl.mechanisms=PLAIN
sasl.username=admin
sasl.password=jdyx#qwe12

Installing kafkacat

Ubuntu: apt install kafkacat
CentOS: see https://blog.51cto.com/u_15067222/4321386

Test commands

echo '{"data":[{"value":"00002192","type":2,"id":300011},{"value":"00000D7A","type":2,"id":300009}],"plcId":0,"endTime":1571894892,"startTime":1571891260,"tag":28690}' | kafkacat -b 10.255.60.149:9092 -t device-data -P -F ./kafkacat.conf

kafkacat -C -b 10.255.60.149:9092 -t device-data -F ./kafkacat.conf
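Kafka's bundled console tools can run the same test as kafkacat; a sketch of an equivalent client config (note the naming differences: librdkafka/kafkacat uses sasl.mechanisms, sasl.username and sasl.password, while the Java tools use the singular sasl.mechanism and put credentials in sasl.jaas.config):

```shell
# Sketch: client config for Kafka's bundled CLI tools, equivalent to the
# kafkacat.conf above.
cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="jdyx#qwe12";
EOF

# usage (run where the Kafka tools are available, e.g. inside kafka_node1):
# kafka-console-consumer.sh --bootstrap-server 10.255.60.149:9092 \
#   --topic device-data --from-beginning --consumer.config client.properties
```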

https://www.cnblogs.com/xdao/p/10674848.html

References

https://www.cnblogs.com/zhouj850/p/15630101.html

https://www.luckyhe.com/post/95.html

From: https://www.cnblogs.com/regit/p/17788724.html

    Compose简介Compose项目是Docker官方的开源项目,负责实现对Docker容器集群的快速编排。从功能上看,跟OpenStack中的Heat十分类似。其代码目前在https://github.com/docker/compose上开源。Compose定位是「定义和运行多个Docker容器的应用(Definingandrunnin......