
86 - Cloud-Native Operating System - A Production Case of Containerizing a Zookeeper Cluster


  • Case business logic

(Diagram: case business logic - a Zookeeper cluster running on Kubernetes with NFS-backed persistent storage)

  • Implementation steps
  • Build the Zookeeper image
#Prepare the files needed to build the image
[root@K8s-ansible zookeeper]#chmod a+x *.sh
[root@K8s-ansible zookeeper]#ll
total 36900
drwxr-xr-x  4 root root     4096 Apr  9 13:47 ./
drwxr-xr-x 11 root root     4096 Apr  9 02:59 ../
-rw-r--r--  1 root root     1758 Apr  9 13:11 Dockerfile
-rw-r--r--  1 root root    63587 Apr  9 02:59 KEYS
drwxr-xr-x  2 root root     4096 Apr  9 02:59 bin/
-rwxr-xr-x  1 root root      264 Apr  9 02:59 build-command.sh*
drwxr-xr-x  2 root root     4096 Apr  9 02:59 conf/
-rwxr-xr-x  1 root root      278 Apr  9 13:47 entrypoint.sh*
-rw-r--r--  1 root root       91 Apr  9 02:59 repositories
-rw-r--r--  1 root root     2270 Apr  9 02:59 zookeeper-3.12-Dockerfile.tar.gz
-rw-r--r--  1 root root 37676320 Apr  9 02:59 zookeeper-3.4.14.tar.gz
-rw-r--r--  1 root root      836 Apr  9 02:59 zookeeper-3.4.14.tar.gz.asc

#Configure the Alpine package mirrors (repositories file)
[root@K8s-ansible zookeeper]#cat repositories 
http://mirrors.aliyun.com/alpine/v3.6/main
http://mirrors.aliyun.com/alpine/v3.6/community

#Prepare the JDK base image - only about 31 MB
[root@K8s-ansible zookeeper]#docker pull elevy/slim_java:8
8: Pulling from elevy/slim_java
88286f41530e: Pull complete 
7141511c4dad: Pull complete 
fd529fe251b3: Pull complete 
Digest: sha256:044e42fb89cda51e83701349a9b79e8117300f4841511ed853f73caf7fc98a51
Status: Downloaded newer image for elevy/slim_java:8
docker.io/elevy/slim_java:8
#Push it to Harbor
[root@K8s-ansible zookeeper]#docker tag elevy/slim_java:8 K8s-harbor01.mooreyxia.com/baseimages/slim_java:8
[root@K8s-ansible zookeeper]#docker push K8s-harbor01.mooreyxia.com/baseimages/slim_java:8
The push refers to repository [K8s-harbor01.mooreyxia.com/baseimages/slim_java]
e053edd72ca6: Pushed 
aba783efb1a4: Pushed 
5bef08742407: Pushed 
8: digest: sha256:817d0af5d4f16c29509b8397784f5d4ec3accb1bfde4e474244ed3be7f41a604 size: 952

#Prepare the Zookeeper configuration file - the same base settings for every node
[root@K8s-ansible zookeeper]#cat conf/zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/wal
#snapCount=100000
autopurge.purgeInterval=1
clientPort=2181
quorumListenOnAllIPs=true
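
For reference: with tickTime=2000 (2 s per tick), initLimit=10 gives followers 10 x 2 s = 20 s to connect and sync with the leader at startup, and syncLimit=5 means a follower may lag the leader by at most 5 x 2 s = 10 s before it is dropped from the quorum.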

#Prepare the cluster bootstrap script - it writes the Zookeeper myid and appends the server entries for ports 2888:3888
[root@K8s-ansible zookeeper]#cat entrypoint.sh 
#!/bin/bash

echo ${MYID:-1} > /zookeeper/data/myid

if [ -n "$SERVERS" ]; then
    IFS=\, read -a servers <<<"$SERVERS"
    for i in "${!servers[@]}"; do 
        printf "\nserver.%i=%s:2888:3888" "$((1 + $i))" "${servers[$i]}" >> /zookeeper/conf/zoo.cfg
        #appends server.1~3=zookeeper1-3:2888:3888 to zoo.cfg
    done
fi

cd /zookeeper
exec "$@"

#Zookeeper - log4j logging configuration
[root@K8s-ansible zookeeper]#cat conf/log4j.properties 
# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE, ROLLINGFILE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/zookeeper/log
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.tracelog.dir=/zookeeper/log
zookeeper.tracelog.file=zookeeper_trace.log

#
# ZooKeeper Logging Configuration
#

# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=${zookeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
log4j.appender.ROLLINGFILE.MaxBackupIndex=5

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n


#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}

log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n
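
Note that the Dockerfile below sets ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE", and zkServer.sh passes that value through as -Dzookeeper.root.logger, so at runtime logs go both to stdout (where kubectl logs can see them) and to the rolling files under /zookeeper/log.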


#Prepare the Dockerfile
[root@K8s-ansible zookeeper]#cat Dockerfile 
#FROM harbor-linux38.local.com/linux38/slim_java:8
# JDK base image pushed to Harbor earlier
FROM K8s-harbor01.mooreyxia.com/baseimages/slim_java:8

# Zookeeper version
ENV ZK_VERSION 3.4.14
# Alpine is the base system; point apk at the mirrors configured above
ADD repositories /etc/apk/repositories
# Download Zookeeper
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
      ca-certificates   \
      gnupg             \
      tar               \
      wget &&           \
    #
    # Install dependencies
    apk add --no-cache  \
      bash &&           \
    #
    #
    # Verify the signature
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    #
    # Set up directories
    #
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    #
    # Install
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    #
    # Slim down
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    #
    # Clean up
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"

COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010

ENTRYPOINT [ "/entrypoint.sh" ]

CMD [ "zkServer.sh", "start-foreground" ]

EXPOSE 2181 2888 3888 9010
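
With this ENTRYPOINT/CMD pair the container effectively starts as:

/entrypoint.sh zkServer.sh start-foreground

entrypoint.sh writes the myid file, appends the server list to zoo.cfg, and then exec's zkServer.sh, so the JVM runs in the foreground as PID 1 and its output is what kubectl logs shows.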

#Build the image and push it to Harbor
[root@K8s-ansible zookeeper]#cat build-command.sh 
#!/bin/bash
TAG=$1
docker build -t K8s-harbor01.mooreyxia.com/demo/zookeeper:${TAG} .
sleep 1
docker push  K8s-harbor01.mooreyxia.com/demo/zookeeper:${TAG}

[root@K8s-ansible zookeeper]#bash build-command.sh v3.4.14
...
Successfully built 4be1c51f39dd
Successfully tagged K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14
The push refers to repository [K8s-harbor01.mooreyxia.com/demo/zookeeper]
e562b485e113: Pushed 
471c0a089ec7: Pushed 
e9c1d174b408: Pushed 
bd3506eb3fca: Pushed 
479b1f22723a: Pushed 
0fdd215d56a7: Pushed 
240cfb0dce70: Pushed 
2c1db90485e1: Pushed 
e053edd72ca6: Mounted from baseimages/slim_java 
aba783efb1a4: Mounted from baseimages/slim_java 
5bef08742407: Mounted from baseimages/slim_java 
v3.4.14: digest: sha256:b6e3fe808f5740371d02b7755b0dc610fad5cea0eb127fe550c0fff33d81e54c size: 2621

  • Test the Zookeeper image - mostly omitted here; in production, make sure the image actually works before rolling it out (a quick smoke-test sketch is shown below)
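
A minimal smoke test on the build host might look like this (a sketch only; the container name zk-smoke is just an example). With SERVERS unset the entrypoint appends no server entries, so the node comes up in standalone mode:

docker run -d --name zk-smoke -p 2181:2181 K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14
docker exec zk-smoke /zookeeper/bin/zkServer.sh status                     #expect "Mode: standalone"
docker exec zk-smoke /zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 ls /   #expect [zookeeper]
docker rm -f zk-smoke
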
  • Create the PV/PVC
#Prepare the backing storage - NFS is used here
[root@K8s-haproxy01 ~]#mkdir -p /data/k8sdata/mooreyxia/zookeeper-datadir-1 
[root@K8s-haproxy01 ~]#mkdir -p /data/k8sdata/mooreyxia/zookeeper-datadir-2
[root@K8s-haproxy01 ~]#mkdir -p /data/k8sdata/mooreyxia/zookeeper-datadir-3
[root@K8s-haproxy01 ~]#cat /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#       to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#

/data/k8sdata *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
[root@K8s-haproxy01 ~]#exportfs -avs
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exporting *:/data/volumes
exporting *:/data/k8sdata
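
Before creating the PVs it is worth confirming the exports are reachable from the Kubernetes worker nodes (assumes the NFS client utilities are installed there):

#run on any worker node
showmount -e 192.168.11.203   #should list /data/k8sdata and /data/volumes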

#Prepare the PV volumes - map the NFS exports to PVs
[root@K8s-ansible pv]#cat zookeeper-persistentvolume.yaml 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce 
  nfs:
    server: 192.168.11.203
    path: /data/k8sdata/mooreyxia/zookeeper-datadir-1 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.11.203 
    path: /data/k8sdata/mooreyxia/zookeeper-datadir-2 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.11.203  
    path: /data/k8sdata/mooreyxia/zookeeper-datadir-3 

#Create the PVs
[root@K8s-ansible pv]#kubectl apply -f zookeeper-persistentvolume.yaml 
persistentvolume/zookeeper-datadir-pv-1 created
persistentvolume/zookeeper-datadir-pv-2 created
persistentvolume/zookeeper-datadir-pv-3 created
#Confirm the PVs are usable - STATUS should show Available
[root@K8s-ansible pv]#kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                 STORAGECLASS            REASON   AGE
pvc-b5ae1f9c-8569-4645-8398-0571b6defa6c   500Mi      RWX            Retain           Bound       myserver/myserver-myapp-dynamic-pvc   mooreyxia-nfs-storage            4d9h
zookeeper-datadir-pv-1                     20Gi       RWO            Retain           Available                                                                          46s
zookeeper-datadir-pv-2                     20Gi       RWO            Retain           Available                                                                          46s
zookeeper-datadir-pv-3                     20Gi       RWO            Retain           Available                                                                          46s

#Create PVCs for the application pods to use as storage
[root@K8s-ansible pv]#cat zookeeper-persistentvolumeclaim.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: mooreyxia
spec:
  accessModes:
    - ReadWriteOnce #access mode of the claim
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 10Gi #requested capacity (must not exceed the size of the bound PV)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: mooreyxia
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: mooreyxia
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 10Gi

#Create the PVCs
[root@K8s-ansible pv]#kubectl apply -f zookeeper-persistentvolumeclaim.yaml 
persistentvolumeclaim/zookeeper-datadir-pvc-1 created
persistentvolumeclaim/zookeeper-datadir-pvc-2 created
persistentvolumeclaim/zookeeper-datadir-pvc-3 created
#Confirm the PVCs - the PVs now show Bound
[root@K8s-ansible pv]#kubectl get pvc -n mooreyxia
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   20Gi       RWO                           30s
zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   20Gi       RWO                           30s
zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   20Gi       RWO                           30s
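
Note that each claim reports 20Gi even though only 10Gi was requested: volumeName pins every claim to a specific PV, and a bound claim always shows the full capacity of the PV it is bound to.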
  • Run the Zookeeper cluster
#Prepare the Kubernetes object manifests - NodePort Services make the cluster reachable from outside the cluster
#Zookeeper replicates data between members on its own; as long as the per-node Services can reach each other the data stays in sync, so a StatefulSet is not strictly required
[root@K8s-ansible zookeeper]#cat zookeeper1.yaml 
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: mooreyxia
spec:
  ports:
    - name: client
      port: 2181 #load-balanced client entry point
  selector:
    app: zookeeper #matches all three Zookeeper pods, so client requests are spread across them
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: mooreyxia
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 32181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: mooreyxia
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 32182
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: mooreyxia
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 32183
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "3"
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper1
  namespace: mooreyxia
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      containers:
        - name: server
          image: K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14 
          imagePullPolicy: Always
          env: #environment variables consumed by entrypoint.sh and zkServer.sh
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3" #每个servers后面都有一个zookeeper
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-1 
      volumes:
        - name: zookeeper-datadir-pvc-1 
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-1
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper2
  namespace: mooreyxia
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      containers:
        - name: server
          image: K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14 
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "2"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-2 
      volumes:
        - name: zookeeper-datadir-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-2
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper3
  namespace: mooreyxia
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      containers:
        - name: server
          image: K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14 
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "3"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-3
      volumes:
        - name: zookeeper-datadir-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-3

#Create the Zookeeper pods
[root@K8s-ansible zookeeper]#kubectl apply -f zookeeper1.yaml 
service/zookeeper created
service/zookeeper1 created
service/zookeeper2 created
service/zookeeper3 created
deployment.apps/zookeeper1 created
deployment.apps/zookeeper2 created
deployment.apps/zookeeper3 created
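
At this point the Services exist, so the names in SERVERS (zookeeper1/2/3) resolve through cluster DNS - that is how the server.N entries each pod generates reach the other members. Two optional quick checks (the dns-test pod name is just an example):

kubectl get svc -n mooreyxia | grep zookeeper      #shows the ClusterIPs and the 32181-32183 NodePorts
kubectl run dns-test --rm -it --restart=Never --image=busybox -n mooreyxia -- nslookup zookeeper1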

#Confirm the Zookeeper pods are running
[root@K8s-ansible zookeeper]#kubectl get pod -n mooreyxia|grep zookeeper
zookeeper1-67db986b9f-lxhlf                         1/1     Running   1 (3m2s ago)    3m28s
zookeeper2-6786d47d66-7kvql                         1/1     Running   1 (2m45s ago)   3m28s
zookeeper3-56b4f54865-xd2k8                         1/1     Running   1 (2m59s ago)   3m28s

#Check what kubelet reports about the pod - describe
[root@K8s-ansible zookeeper]#kubectl describe pod zookeeper1-67db986b9f-lxhlf -n mooreyxia
Name:             zookeeper1-67db986b9f-lxhlf
Namespace:        mooreyxia
Priority:         0
Service Account:  default
Node:             192.168.11.215/192.168.11.215
Start Time:       Sun, 09 Apr 2023 14:39:40 +0000
Labels:           app=zookeeper
                  pod-template-hash=67db986b9f
                  server-id=1
Annotations:      <none>
Status:           Running
IP:               10.200.67.33
IPs:
  IP:           10.200.67.33
Controlled By:  ReplicaSet/zookeeper1-67db986b9f
Containers:
  server:
    Container ID:   containerd://79b0be34ddb9df62727282da761f80b7c4ec0ce37cf53bec1c8e5a2e0adc1613
    Image:          K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14
    Image ID:       K8s-harbor01.mooreyxia.com/demo/zookeeper@sha256:b6e3fe808f5740371d02b7755b0dc610fad5cea0eb127fe550c0fff33d81e54c
    Ports:          2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Sun, 09 Apr 2023 14:40:13 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 09 Apr 2023 14:40:04 +0000
      Finished:     Sun, 09 Apr 2023 14:40:06 +0000
    Ready:          True
    Restart Count:  1
    Environment:
      MYID:      1
      SERVERS:   zookeeper1,zookeeper2,zookeeper3
      JVMFLAGS:  -Xmx2G
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hd9r7 (ro)
      /zookeeper/data from zookeeper-datadir-pvc-1 (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  zookeeper-datadir-pvc-1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  zookeeper-datadir-pvc-1
    ReadOnly:   false
  kube-api-access-hd9r7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age                    From               Message
  ----    ------     ----                   ----               -------
  Normal  Scheduled  4m13s                  default-scheduler  Successfully assigned mooreyxia/zookeeper1-67db986b9f-lxhlf to 192.168.11.215
  Normal  Pulled     3m49s                  kubelet            Successfully pulled image "K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14" in 21.024280603s (21.025309321s including waiting)
  Normal  Pulling    3m42s (x2 over 4m10s)  kubelet            Pulling image "K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14"
  Normal  Pulled     3m41s                  kubelet            Successfully pulled image "K8s-harbor01.mooreyxia.com/demo/zookeeper:v3.4.14" in 1.169990499s (1.170009649s including waiting)
  Normal  Created    3m40s (x2 over 3m49s)  kubelet            Created container server
  Normal  Started    3m40s (x2 over 3m48s)  kubelet            Started container server
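
The Restart Count of 1 above (Last State: Terminated, Exit Code 1, lasting only a couple of seconds) is typical of a first rollout: the earliest pod most likely starts before its peers' Services and pods are resolvable, the quorum connection fails and the process exits, and kubelet restarts it once the other members are up. As long as the pods settle into Running without further restarts, this is harmless.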

#Confirm there are no errors in the logs
[root@K8s-ansible zookeeper]#kubectl logs zookeeper1-67db986b9f-lxhlf -n mooreyxia
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
...
  • Verify the cluster status
#Confirm each Zookeeper pod has joined the cluster - check several pods to verify the leader/follower roles
[root@K8s-ansible zookeeper]#kubectl exec -it zookeeper1-67db986b9f-lxhlf bash -n mooreyxia
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower #this node is a follower; if it showed standalone, the node would be running in single-server mode

#Inspect the generated configuration
bash-4.3# cat /zookeeper/conf/zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/wal
#snapCount=100000
autopurge.purgeInterval=1
clientPort=2181
quorumListenOnAllIPs=true
server.1=zookeeper1:2888:3888 #cluster entries appended by entrypoint.sh
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888

[root@K8s-ansible zookeeper]#kubectl exec -it zookeeper2-6786d47d66-7kvql bash -n mooreyxia
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower #this node is a follower
bash-4.3# exit
exit

[root@K8s-ansible zookeeper]#kubectl exec -it zookeeper3-56b4f54865-xd2k8 bash -n mooreyxia
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader #this node is the leader


#Try connecting to Zookeeper
#You can connect to any node in the Zookeeper cluster and run the following; zkCli.sh connects to the local node by default, and once connected you can read and update data
#For more detail on Zookeeper operations, see my dedicated Zookeeper posts
bash-4.3# zkCli.sh -server 192.168.11.211:32181
Connecting to 192.168.11.211:32181
2023-04-09 15:03:57,442 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2023-04-09 15:03:57,447 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zookeeper3-56b4f54865-xd2k8
2023-04-09 15:03:57,447 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_144
2023-04-09 15:03:57,451 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2023-04-09 15:03:57,452 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-oracle
2023-04-09 15:03:57,453 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/zookeeper/bin/../zookeeper-server/target/classes:/zookeeper/bin/../build/classes:/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/zookeeper/bin/../build/lib/*.jar:/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/zookeeper/bin/../lib/netty-3.10.6.Final.jar:/zookeeper/bin/../lib/log4j-1.2.17.jar:/zookeeper/bin/../lib/jline-0.9.94.jar:/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper/bin/../zookeeper-3.4.14.jar:/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/zookeeper/bin/../conf:
2023-04-09 15:03:57,453 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2023-04-09 15:03:57,454 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2023-04-09 15:03:57,454 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2023-04-09 15:03:57,455 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2023-04-09 15:03:57,455 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2023-04-09 15:03:57,456 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.15.0-69-generic
2023-04-09 15:03:57,456 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2023-04-09 15:03:57,457 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2023-04-09 15:03:57,457 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
2023-04-09 15:03:57,459 [myid:] - INFO  [main:ZooKeeper@442] - Initiating client connection, connectString=192.168.11.211:32181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@1de0aca6
Welcome to ZooKeeper!

You can try taking one pod of the Zookeeper cluster offline and disabling Harbor so that Kubernetes cannot recreate it automatically, then observe how the remaining Zookeeper nodes elect a new leader. The full walkthrough is omitted here; a rough sketch follows.
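
A rough sketch of that failover test, using the names from above (instead of disabling Harbor, scaling the Deployment to zero is a simpler way to keep the pod from being recreated):

#take the current leader (zookeeper3 here) offline
kubectl scale deployment zookeeper3 --replicas=0 -n mooreyxia
#check the remaining members - one of the followers should be promoted to leader within seconds
kubectl exec -n mooreyxia deploy/zookeeper1 -- /zookeeper/bin/zkServer.sh status
kubectl exec -n mooreyxia deploy/zookeeper2 -- /zookeeper/bin/zkServer.sh status
#bring the third member back afterwards
kubectl scale deployment zookeeper3 --replicas=1 -n mooreyxia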

I'm moore - let's keep at it together!!!

From: https://blog.51cto.com/mooreyxia/6179268

    属实是失踪人口了,想了一下还是把题解打到这儿。conteset地址:https://codeforces.com/contest/1797 A.题目大意:n*m的方格上给两个点,询问最少增加的障碍格子使得这两个点不连通。解题思路:水题,但是手速有点慢。直接问靠不靠墙,靠几面墙,不靠墙答案4,靠一面答案3,靠两面答案2,取两个点......