
Deploying ZooKeeper on k8s


1. Deploying a ZooKeeper cluster

1.1 Deploying on designated nodes

  • Label the following nodes: k8s-node01, k8s-node02, and k8s-master03 (the three-node ZooKeeper cluster will run on these three nodes)
[root@k8s-master01 ~]# kubectl get nodes 
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    <none>   300d   v1.19.5
k8s-master02   Ready    <none>   300d   v1.19.5
k8s-master03   Ready    <none>   300d   v1.19.5
k8s-node01     Ready    <none>   300d   v1.19.5
k8s-node02     Ready    <none>   300d   v1.19.5

# Apply the labels
# Note: this effectively adds two label keys (app.kubernetes.io/component and app.kubernetes.io/name) that will be used for scheduling later
kubectl get nodes --show-labels
kubectl label nodes k8s-master03 app.kubernetes.io/component=zookeeper
kubectl label nodes k8s-node02 app.kubernetes.io/component=zookeeper
kubectl label nodes k8s-node01 app.kubernetes.io/component=zookeeper
kubectl label nodes k8s-master03 app.kubernetes.io/name=zookeeper
kubectl label nodes k8s-node01 app.kubernetes.io/name=zookeeper
kubectl label nodes k8s-node02 app.kubernetes.io/name=zookeeper
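
  • To double-check that all three nodes carry both labels before moving on, query with a label selector:
kubectl get nodes -l app.kubernetes.io/component=zookeeper,app.kubernetes.io/name=zookeeper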

1.2 Creating the Services

[root@k8s-master01 集群]# cat zk-svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  namespace: infra
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: tcp-client
      port: 2181
      targetPort: client
    - name: tcp-follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
---
[root@k8s-master01 集群]# cat zk-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-test
  namespace: infra
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: tcp-client
      port: 2181
      targetPort: client
      nodePort: null
    - name: tcp-follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
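
  • The headless Service gives each StatefulSet pod a stable DNS name of the form <pod>.zk-headless.infra.svc.cluster.local, which ZOO_SERVERS in section 1.4 relies on. Once the pods exist, resolution can be checked from a throwaway pod (a sketch; the busybox image is just an example):
kubectl run dns-test -n infra --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup zk-test-0.zk-headless.infra.svc.cluster.local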

1.3 Creating the ZooKeeper startup script and mounting it via a ConfigMap

[root@k8s-master01 集群]# cat zk-cm.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: infra-zk-scripts
  namespace: infra
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
data:
  init-certs.sh: |-
    #!/bin/bash
  setup.sh: |-
    #!/bin/bash
    if [[ -f "/bitnami/zookeeper/data/myid" ]]; then
        export ZOO_SERVER_ID="$(cat /bitnami/zookeeper/data/myid)"
    else
        HOSTNAME="$(hostname -s)"
        if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
            ORD=${BASH_REMATCH[2]}
            export ZOO_SERVER_ID="$((ORD + 1 ))"
        else
            echo "Failed to get index from hostname $HOST"
            exit 1
        fi
    fi
    exec /entrypoint.sh /run.sh
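
  • Each pod derives its ZOO_SERVER_ID from the ordinal in its hostname (zk-test-0 gets ID 1, zk-test-1 gets ID 2, and so on), or reuses an existing myid file after a restart. Once the StatefulSet from section 1.4 is running, the assignment can be confirmed (a sketch, using the data path from the script above):
for i in 0 1 2; do
  kubectl exec zk-test-$i -n infra -- cat /bitnami/zookeeper/data/myid
done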

1.4 Creating the StatefulSet

  • For dynamic storage I am using NFS. This is not recommended for production; prefer a distributed storage backend such as Ceph, MinIO, or GlusterFS (GFS).
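  • A quick way to confirm the StorageClass referenced by volumeClaimTemplates below actually exists (standard kubectl; infra-nfs-storage is the class used in this example):
kubectl get storageclass infra-nfs-storage
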
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk-test
  namespace: infra
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
    role: zookeeper
spec:
  replicas: 3
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app.kubernetes.io/name: zookeeper
      app.kubernetes.io/component: zookeeper
  serviceName: zk-headless
  updateStrategy:
    rollingUpdate: {}
    type: RollingUpdate
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: zookeeper
    spec:
      serviceAccountName: default
      affinity:
        nodeAffinity:                                      # node affinity
          requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement: only schedule onto nodes labelled app.kubernetes.io/component=zookeeper
            nodeSelectorTerms:
            - matchExpressions:
              - key: app.kubernetes.io/component
                operator: In
                values:
                  - zookeeper
        podAntiAffinity:                                    # pod anti-affinity
          preferredDuringSchedulingIgnoredDuringExecution:  # soft preference: spread the pods across different nodes
          - weight: 49                                      # weight; with multiple preferences, scheduling is balanced by weight
            podAffinityTerm:
              topologyKey: app.kubernetes.io/name           # use the app.kubernetes.io/name node label as the topology domain
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/component
                  operator: In
                  values:
                  - zookeeper
      securityContext:
        fsGroup: 1001
      initContainers:
      containers:
        - name: zookeeper
          image: bitnami/zookeeper:3.8.0-debian-10-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          command:
            - /scripts/setup.sh
          resources:                                       # requests equal to limits gives the Guaranteed QoS class (highest)
            limits:
              cpu: 500m
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 500Mi
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: ZOO_DATA_LOG_DIR
              value: ""
            - name: ZOO_PORT_NUMBER
              value: "2181"
            - name: ZOO_TICK_TIME
              value: "2000"
            - name: ZOO_INIT_LIMIT
              value: "10"
            - name: ZOO_SYNC_LIMIT
              value: "5"
            - name: ZOO_PRE_ALLOC_SIZE
              value: "65536"
            - name: ZOO_SNAPCOUNT
              value: "100000"
            - name: ZOO_MAX_CLIENT_CNXNS
              value: "60"
            - name: ZOO_4LW_COMMANDS_WHITELIST
              value: "srvr, mntr, ruok"
            - name: ZOO_LISTEN_ALLIPS_ENABLED
              value: "no"
            - name: ZOO_AUTOPURGE_INTERVAL
              value: "0"
            - name: ZOO_AUTOPURGE_RETAIN_COUNT
              value: "3"
            - name: ZOO_MAX_SESSION_TIMEOUT
              value: "40000"
            - name: ZOO_SERVERS
              value: zk-test-0.zk-headless.infra.svc.cluster.local:2888:3888::1 zk-test-1.zk-headless.infra.svc.cluster.local:2888:3888::2 zk-test-2.zk-headless.infra.svc.cluster.local:2888:3888::3
            - name: ZOO_ENABLE_AUTH
              value: "no"
            - name: ZOO_HEAP_SIZE
              value: "1024"
            - name: ZOO_LOG_LEVEL
              value: "ERROR"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          ports:
            - name: client
              containerPort: 2181
            - name: follower
              containerPort: 2888
            - name: election
              containerPort: 3888
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
          volumeMounts:
            - name: scripts
              mountPath: /scripts/setup.sh
              subPath: setup.sh
            - name: zookeeper-data
              mountPath: /bitnami/zookeeper
      volumes:
        - name: scripts
          configMap:
            name: infra-zk-scripts
            defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: zookeeper-data
    spec:
      storageClassName: infra-nfs-storage
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
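
  • With the Services, ConfigMap and StatefulSet defined, apply the manifests and wait for all three replicas to come up (a sketch; the StatefulSet file name zk-statefulset.yaml is an assumption, the others match the listings above):
kubectl apply -f zk-svc-headless.yaml -f zk-svc.yaml -f zk-cm.yaml -f zk-statefulset.yaml
kubectl rollout status statefulset/zk-test -n infra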

1.5 Verifying the deployment

  • The three pods are spread across the three labelled nodes
[root@k8s-master01 集群]# kubectl get po -n infra  -l app.kubernetes.io/component=zookeeper -owide 
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
zk-test-0   1/1     Running   0          10m   10.244.195.58   k8s-master03   <none>           <none>
zk-test-1   1/1     Running   0          10m   10.244.85.200   k8s-node01     <none>           <none>
zk-test-2   1/1     Running   0          10m   10.244.58.196   k8s-node02     <none>           <none>
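
  • Each replica should also have a bound PVC created from volumeClaimTemplates (named zookeeper-data-<pod>) and provisioned by infra-nfs-storage:
kubectl get pvc -n infra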

1.6 Inspecting the ZooKeeper configuration

[root@k8s-master01 集群]# kubectl exec -it zk-test-0 -n infra -- cat /opt/bitnami/zookeeper/conf/zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/bitnami/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=0

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
preAllocSize=65536
snapCount=100000
maxCnxns=0
reconfigEnabled=false
quorumListenOnAllIPs=false
4lw.commands.whitelist=srvr, mntr, ruok
maxSessionTimeout=40000
admin.serverPort=8080
admin.enableServer=true
server.1=zk-test-0.zk-headless.infra.svc.cluster.local:2888:3888;2181
server.2=zk-test-1.zk-headless.infra.svc.cluster.local:2888:3888;2181
server.3=zk-test-2.zk-headless.infra.svc.cluster.local:2888:3888;2181
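
  • Because srvr, mntr and ruok are whitelisted above, the four-letter-word commands give another quick health check; nc is already present in the Bitnami image (the probes rely on it):
kubectl exec -it zk-test-0 -n infra -- bash -c 'echo mntr | nc -w 2 localhost 2181'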

1.7 Checking the ZooKeeper cluster status

  • One leader and two followers means the cluster is healthy; the deployment is complete
[root@k8s-master01 集群]# kubectl exec -it zk-test-0 -n infra -- /opt/bitnami/zookeeper/bin/zkServer.sh status
/opt/bitnami/java/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

[root@k8s-master01 集群]# kubectl exec -it zk-test-1 -n infra -- /opt/bitnami/zookeeper/bin/zkServer.sh status
/opt/bitnami/java/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader

[root@k8s-master01 集群]# kubectl exec -it zk-test-2 -n infra -- /opt/bitnami/zookeeper/bin/zkServer.sh status
/opt/bitnami/java/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
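
  • As an end-to-end smoke test, write a znode on one member and read it back from another with zkCli.sh (a sketch; /smoke-test is just an example path):
kubectl exec -it zk-test-0 -n infra -- /opt/bitnami/zookeeper/bin/zkCli.sh create /smoke-test hello
kubectl exec -it zk-test-2 -n infra -- /opt/bitnami/zookeeper/bin/zkCli.sh get /smoke-test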

1.8 zk-pdb.yaml (PodDisruptionBudget)

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-infra-pdb
  namespace: infra
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: zookeeper
  minAvailable: 2     # at least 2 pods must stay available during voluntary disruptions (e.g. node drains), preserving quorum
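
  • Apply the budget and confirm it tracks the ZooKeeper pods; with minAvailable: 2 and three healthy replicas, ALLOWED DISRUPTIONS should read 1 (the file name follows the section heading):
kubectl apply -f zk-pdb.yaml
kubectl get pdb zk-infra-pdb -n infra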

From: https://www.cnblogs.com/hsyw/p/16753966.html
