
Deploying a Zookeeper + Kafka Cluster with Helm


3. Deploying the Zookeeper and Kafka Clusters with Helm

3.1 Helm preparation

# Helm client installation docs
https://helm.sh/docs/intro/install/
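One option from the docs above is the official install script; this is only a sketch, and downloading a packaged release binary for your platform works just as well:

# Install the helm client via the documented install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh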

# Add the bitnami and official stable helm repos:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add stable https://charts.helm.sh/stable

# Update the repos
helm repo update
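An optional sanity check that both repos were registered before searching for charts:

# List the configured repos; bitnami and stable should both appear
helm repo list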

3.2 Deploying the Zookeeper and Kafka clusters

# StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infra-nfs-zk
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"   # "false": data is not kept when the PVC is deleted; "true": data is kept
  
# PersistentVolumeClaim
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-zk
  namespace: infra
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: infra-nfs-zk
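A minimal sketch of applying the two manifests above, assuming they are saved as sc-zk.yaml and pvc-zk.yaml (file names chosen here for illustration) and that the infra namespace and the fuseim.pri/ifs NFS provisioner already exist:

kubectl apply -f sc-zk.yaml
kubectl apply -f pvc-zk.yaml

# Verify: the StorageClass should be listed and the PVC should reach Bound
kubectl get sc infra-nfs-zk
kubectl get pvc pvc-zk -n infra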
  • Installation option 1: download the chart first, then install it
# Check available chart versions
[root@k8s-master01 helm]# helm search repo zookeeper
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/zookeeper               10.2.3          3.8.0           Apache ZooKeeper provides a reliable, centraliz...
bitnami/dataplatform-bp1        12.0.2          1.0.1           DEPRECATED This Helm chart can be used for the ...
bitnami/dataplatform-bp2        12.0.5          1.0.1           DEPRECATED This Helm chart can be used for the ...
bitnami/kafka                   19.0.0          3.3.1           Apache Kafka is a distributed streaming platfor...
bitnami/schema-registry         6.0.0           7.2.2           Confluent Schema Registry provides a RESTful in...
bitnami/solr                    6.2.2           9.0.0           Apache Solr is an extremely powerful, open sour...

# List all historical versions of the zookeeper chart
helm search repo zookeeper -l

# Pull the chart
helm pull bitnami/zookeeper

# Extract the chart
[root@k8s-master01 helm]# tar -xf zookeeper-10.2.3.tgz  && cd zookeeper/

# Configuration changes
# StorageClass name
persistence.storageClass: "infra-nfs-zk"
dataLogDir.existingClaim: "pvc-zk"
replicaCount: 3
# tls.client.enabled: false (disabled by default)
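For reference, a sketch of how those keys could look as a nested snippet in values.yaml; the exact nesting can differ between chart versions, so cross-check against the downloaded chart:

replicaCount: 3
persistence:
  storageClass: "infra-nfs-zk"
dataLogDir:
  existingClaim: "pvc-zk"
# Key paths follow the list above; confirm the nesting with: helm show values bitnami/zookeeper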

# Update the corresponding settings in values.yaml: replica count, auth, persistence
[root@k8s-master01 zookeeper]# helm install -n infra zookeeper .
NAME: zookeeper
LAST DEPLOYED: Wed Oct 19 23:01:23 2022
NAMESPACE: infra
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 10.2.3
APP VERSION: 3.8.0

** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zookeeper.infra.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace infra -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    kubectl port-forward --namespace infra svc/zookeeper 2181:2181 &
    zkCli.sh 127.0.0.1:2181
    
# Check the deployment result
[root@k8s-master01 helm]# kubectl get po -n infra 
NAME                                      READY   STATUS    RESTARTS   AGE
zookeeper-0                               1/1     Running   0          3m49s
zookeeper-1                               1/1     Running   0          3m53s
zookeeper-2                               1/1     Running   0          3m49s
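Optionally check the ensemble roles; a sketch assuming zkServer.sh is on the PATH inside the bitnami/zookeeper image:

for i in 0 1 2; do
  kubectl exec -n infra zookeeper-$i -- zkServer.sh status
done
# Expect one pod reporting "Mode: leader" and two reporting "Mode: follower"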
  • Installation option 2: install Kafka directly from the repo
[root@k8s-master01 helm]# helm install kafka1 bitnami/kafka --set zookeeper.enabled=false --set replicaCount=3 --set externalZookeeper.servers=zookeeper --set persistence.enabled=false -n infra
NAME: kafka1
LAST DEPLOYED: Wed Oct 19 23:34:33 2022
NAMESPACE: infra
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 19.0.0
APP VERSION: 3.3.1

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka1.infra.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka1-0.kafka1-headless.infra.svc.cluster.local:9092
    kafka1-1.kafka1-headless.infra.svc.cluster.local:9092
    kafka1-2.kafka1-headless.infra.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka1-client --restart='Never' --image docker.io/bitnami/kafka:3.3.1-debian-11-r1 --namespace infra --command -- sleep infinity
    kubectl exec --tty -i kafka1-client --namespace infra -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka1-0.kafka1-headless.infra.svc.cluster.local:9092,kafka1-1.kafka1-headless.infra.svc.cluster.local:9092,kafka1-2.kafka1-headless.infra.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka1.infra.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
            
# Check the deployment result
[root@k8s-master01 ~]# kubectl get po -n infra 
NAME                                      READY   STATUS              RESTARTS   AGE
kafka1-0                                  1/1     Running             0          14m
kafka1-1                                  0/1     Running             0          14m
kafka1-2                                  1/1     Running             0          14m
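kafka1-1 is still 0/1 READY in the listing above; if a broker stays not-ready, its events and logs are the usual first place to look (plain kubectl, nothing chart-specific):

kubectl describe pod kafka1-1 -n infra
kubectl logs kafka1-1 -n infra --tail=100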
# Kafka verification (using the same client pod, release name, and namespace as in the chart NOTES above)
kubectl run kafka1-client --restart='Never' --image docker.io/bitnami/kafka:3.3.1-debian-11-r1 --namespace infra --command -- sleep infinity
kubectl exec --tty -i kafka1-client --namespace infra -- bash

# Producer:
kafka-console-producer.sh \
--broker-list kafka1-0.kafka1-headless.infra.svc.cluster.local:9092,kafka1-1.kafka1-headless.infra.svc.cluster.local:9092,kafka1-2.kafka1-headless.infra.svc.cluster.local:9092 \
--topic test

# Consumer:
kafka-console-consumer.sh \
--bootstrap-server kafka1.infra.svc.cluster.local:9092 \
--topic test \
--from-beginning
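Beyond the console producer and consumer, topics can be managed from the same client pod; a sketch using the standard Kafka CLI bundled in the bitnami image (the topic name test matches the example above):

# Create a topic with 3 partitions, replicated across the 3 brokers
kafka-topics.sh --bootstrap-server kafka1.infra.svc.cluster.local:9092 \
    --create --topic test --partitions 3 --replication-factor 3

# List and describe topics
kafka-topics.sh --bootstrap-server kafka1.infra.svc.cluster.local:9092 --list
kafka-topics.sh --bootstrap-server kafka1.infra.svc.cluster.local:9092 --describe --topic test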

From: https://www.cnblogs.com/hsyw/p/16808282.html
