
Deploying a three-node ZooKeeper cluster on k8s

Posted: 2022-10-13 22:00:44


    I. Notes on deploying a three-node ZooKeeper cluster

    1. Deploy with a StatefulSet, and use a headless Service so each node has a stable DNS address for peer connections.
    2. Persist ZooKeeper's data, using volumeClaimTemplates to provision a volume per node.
    3. Use an init script to assign each node its own myid.
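    The ordinal-to-myid mapping from step 3 can be tried in isolation. A minimal bash sketch (hypothetical helper name `derive_server_id`; assumes StatefulSet-style `<name>-<ordinal>` pod hostnames, mirroring the setup.sh script in the manifest):

```shell
#!/bin/bash
# StatefulSet pods are named <name>-<ordinal> (zookeeper-0, zookeeper-1, ...).
# ZooKeeper myid values must start at 1, so map ordinal N to myid N+1.
derive_server_id() {
  local host="$1"
  if [[ $host =~ (.*)-([0-9]+)$ ]]; then
    echo "$(( BASH_REMATCH[2] + 1 ))"
  else
    echo "cannot derive server id from hostname: $host" >&2
    return 1
  fi
}

derive_server_id "zookeeper-0"   # prints 1
derive_server_id "zookeeper-2"   # prints 3
```

    The real setup.sh additionally prefers an existing /bitnami/zookeeper/data/myid file, so a node keeps its identity across restarts even if pod naming changes.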


    II. YAML manifests

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: zk-scripts
      namespace: mid
      labels:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: zookeeper
    data:
      init-certs.sh: |-
        #!/bin/bash
      setup.sh: |-
        #!/bin/bash
        if [[ -f "/bitnami/zookeeper/data/myid" ]]; then
            export ZOO_SERVER_ID="$(cat /bitnami/zookeeper/data/myid)"
        else
            HOSTNAME="$(hostname -s)"
            if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
                ORD=${BASH_REMATCH[2]}
                export ZOO_SERVER_ID="$((ORD + 1 ))"
            else
                echo "Failed to get index from hostname $HOSTNAME"
                exit 1
            fi
        fi
        exec /entrypoint.sh /run.sh
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper-headless-svc
      namespace: mid
      labels:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: zookeeper
    spec:
      type: ClusterIP
      clusterIP: None
      publishNotReadyAddresses: true
      ports:
        - name: tcp-client
          port: 2181
          targetPort: client
        - name: tcp-follower
          port: 2888
          targetPort: follower
        - name: tcp-election
          port: 3888
          targetPort: election
      selector:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: zookeeper
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper-svc
      namespace: mid
      labels:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: zookeeper
    spec:
      type: ClusterIP
      sessionAffinity: None
      ports:
        - name: tcp-client
          port: 2181
          targetPort: client
        - name: tcp-follower
          port: 2888
          targetPort: follower
        - name: tcp-election
          port: 3888
          targetPort: election
      selector:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: zookeeper
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: zookeeper
      namespace: mid
      labels:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: zookeeper
        role: zookeeper
    spec:
      replicas: 3
      podManagementPolicy: Parallel
      selector:
        matchLabels:
          app.kubernetes.io/name: zookeeper
          app.kubernetes.io/component: zookeeper
      serviceName: zookeeper-headless-svc
      updateStrategy:
        rollingUpdate: {}
        type: RollingUpdate
      template:
        metadata:
          labels:
            app.kubernetes.io/name: zookeeper
            app.kubernetes.io/component: zookeeper
        spec:
          serviceAccountName: default
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
                - podAffinityTerm:
                    labelSelector:
                      matchLabels:
                        app.kubernetes.io/name: zookeeper
                        app.kubernetes.io/component: zookeeper
                    namespaces:
                      - "mid"
                    topologyKey: kubernetes.io/hostname
                  weight: 1
          securityContext:
            fsGroup: 1001
          containers:
            - name: zookeeper
              image: bitnami/zookeeper:3.8.0-debian-10-r0
              imagePullPolicy: "IfNotPresent"
              securityContext:
                runAsNonRoot: true
                runAsUser: 1001
              command:
                - /scripts/setup.sh
              resources:
                limits: 
                  cpu: 1
                  memory: 2Gi
              env:
                - name: BITNAMI_DEBUG
                  value: "false"
                - name: ZOO_DATA_LOG_DIR
                  value: ""
                - name: ZOO_PORT_NUMBER
                  value: "2181"
                - name: ZOO_TICK_TIME
                  value: "2000"
                - name: ZOO_INIT_LIMIT
                  value: "10"
                - name: ZOO_SYNC_LIMIT
                  value: "5"
                - name: ZOO_PRE_ALLOC_SIZE
                  value: "65536"
                - name: ZOO_SNAPCOUNT
                  value: "100000"
                - name: ZOO_MAX_CLIENT_CNXNS
                  value: "60"
                - name: ZOO_4LW_COMMANDS_WHITELIST
                  value: "srvr, mntr, ruok"
                - name: ZOO_LISTEN_ALLIPS_ENABLED
                  value: "no"
                - name: ZOO_AUTOPURGE_INTERVAL
                  value: "0"
                - name: ZOO_AUTOPURGE_RETAIN_COUNT
                  value: "3"
                - name: ZOO_MAX_SESSION_TIMEOUT
                  value: "40000"
                - name: ZOO_SERVERS
                  value: >-
                    zookeeper-0.zookeeper-headless-svc.mid.svc.cluster.local:2888:3888::1
                    zookeeper-1.zookeeper-headless-svc.mid.svc.cluster.local:2888:3888::2
                    zookeeper-2.zookeeper-headless-svc.mid.svc.cluster.local:2888:3888::3
                - name: ZOO_ENABLE_AUTH
                  value: "no"
                - name: ZOO_HEAP_SIZE
                  value: "1024"
                - name: ZOO_LOG_LEVEL
                  value: "ERROR"
                - name: ALLOW_ANONYMOUS_LOGIN
                  value: "yes"
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.name
              ports:
                - name: client
                  containerPort: 2181
                - name: follower
                  containerPort: 2888
                - name: election
                  containerPort: 3888
              livenessProbe:
                failureThreshold: 6
                initialDelaySeconds: 30
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 5
                exec:
                  command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
              readinessProbe:
                failureThreshold: 6
                initialDelaySeconds: 5
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 5
                exec:
                  command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
              volumeMounts:
                - name: scripts
                  mountPath: /scripts/setup.sh
                  subPath: setup.sh
                - name: zookeeper-data
                  mountPath: /bitnami/zookeeper
          volumes:
            - name: scripts
              configMap:
                name: zk-scripts
                defaultMode: 0755
      volumeClaimTemplates:
      - metadata:
          name: zookeeper-data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "alicloud-disk-essd-cn-shanghai-b"
          resources:
            requests:
              storage: 20Gi
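
    For reference, each ZOO_SERVERS entry above uses the Bitnami image's `<host>:<followerPort>:<electionPort>::<myid>` layout, which the container expands into `server.N=` lines in zoo.cfg. A rough sketch of that expansion (hypothetical helper name; not the image's actual code):

```shell
#!/bin/bash
# Split one ZOO_SERVERS entry, <host>:<followerPort>:<electionPort>::<myid>,
# into the server.<id>=<host>:<ports> line that ends up in zoo.cfg.
to_zoo_cfg_line() {
  local entry="$1"
  local id="${entry##*::}"    # text after the final '::' -> the myid
  local addr="${entry%::*}"   # everything before it -> host:2888:3888
  echo "server.${id}=${addr}"
}

to_zoo_cfg_line "zookeeper-0.zookeeper-headless-svc.mid.svc.cluster.local:2888:3888::1"
# prints server.1=zookeeper-0.zookeeper-headless-svc.mid.svc.cluster.local:2888:3888
```

    After applying the manifests (`kubectl apply -f zk.yaml`), quorum health can be spot-checked with the whitelisted four-letter words, e.g. `kubectl exec -n mid zookeeper-0 -- bash -c 'echo srvr | nc localhost 2181'`.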
    

    From: https://www.cnblogs.com/dfdzh/p/16789859.html
