
Week 6


Week 6 notes

A StatefulSet scales up from front to back (pod 0 → 1 → 2 → 3) and scales down from back to front (3 → 2 → 1 → 0).
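
A quick way to observe the ordering (a sketch; it assumes the StatefulSet defined below has already been applied):

kubectl -n test scale statefulset myserver-myapp --replicas=5   # myserver-myapp-3, then -4, are created in order
kubectl -n test scale statefulset myserver-myapp --replicas=2   # -4, -3, then -2 are terminated in reverse order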

Creating a StatefulSet

---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: StatefulSet 
metadata:
  name: myserver-myapp
  namespace: test
spec:
  replicas: 3
  serviceName: "server-app-frontend"   # must match the headless Service defined below to get stable per-pod DNS
  selector:
    matchLabels:
      app: server-app-frontend
  template:
    metadata:
      labels:
        app: server-app-frontend
    spec:
      containers:
      - name: server-app-frontend
        image: harbor.jackedu.cn/secert/nginx:1.16.1-alpine-perl
        ports:
          - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: server-app-frontend
  namespace: test
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
  selector:
    app: server-app-frontend

root@k8s-deploy:~/k8s-data/yaml# kubectl get po -n test
NAME               READY   STATUS    RESTARTS   AGE
myserver-myapp-0   1/1     Running   0          15s
myserver-myapp-1   1/1     Running   0          9s
myserver-myapp-2   1/1     Running   0          3s

root@k8s-deploy:~/k8s-data/yaml# kubectl get svc  -n test
NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
server-app-frontend   ClusterIP   None         <none>        80/TCP    37s

Access goes through the headless Service: the service name resolves directly to the IPs of the individual Pods (successive lookups may return different Pod IPs):

/ # ping server-app-frontend
PING server-app-frontend (172.20.169.172): 56 data bytes
64 bytes from 172.20.169.172: seq=0 ttl=62 time=2.393 ms
64 bytes from 172.20.169.172: seq=1 ttl=62 time=0.891 ms
^C
--- server-app-frontend ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.891/1.642/2.393 ms
/ # ping server-app-frontend
PING server-app-frontend (172.20.36.98): 56 data bytes
64 bytes from 172.20.36.98: seq=0 ttl=62 time=1.245 ms
^C
--- server-app-frontend ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.245/1.245/1.245 ms
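
Each Pod also gets a stable per-pod DNS record of the form <pod-name>.<service-name>.<namespace>.svc.<cluster-domain>; a sketch, assuming the default cluster domain cluster.local:

/ # nslookup myserver-myapp-0.server-app-frontend.test.svc.cluster.local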

Deploying the Prometheus node-exporter

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring 
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master   # newer clusters (1.24+) taint control-plane nodes with node-role.kubernetes.io/control-plane instead
      containers:
      - image: harbor.jackedu.cn/library/node-exporter:v1.3.1 
        imagePullPolicy: IfNotPresent
        name: prometheus-node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          protocol: TCP
          name: metrics
        volumeMounts:
        - mountPath: /host/proc
          name: proc
        - mountPath: /host/sys
          name: sys
        - mountPath: /host
          name: rootfs
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /
      hostNetwork: true                      # use the host's network namespace (expose 9100 on the node)
      hostPID: true                          # share the host's PID namespace
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring 
spec:
  type: NodePort
  ports:
  - name: http
    port: 9100
    nodePort: 39100   # outside the default 30000-32767 range; requires an extended --service-node-port-range on kube-apiserver
    protocol: TCP
  selector:
    k8s-app: node-exporter
root@k8s-deploy:~/k8s-data/yaml# kubectl get pods -n monitoring
NAME                  READY   STATUS    RESTARTS   AGE
node-exporter-9rmkr   1/1     Running   0          85s
node-exporter-fdkxz   1/1     Running   0          85s
node-exporter-mwb7s   1/1     Running   0          85s
node-exporter-nmjsn   1/1     Running   0          85s
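
Because the DaemonSet uses hostNetwork, every node now serves metrics directly on port 9100; a quick check (sketch, substitute a real node IP):

curl -s http://<node-ip>:9100/metrics | grep ^node_load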

Pod creation flow

Common Pod errors and their causes

Unschedulable: # the Pod cannot be scheduled: kube-scheduler did not find a suitable node

PodScheduled: # the Pod is being scheduled: kube-scheduler has just started scheduling and has not yet assigned the Pod to a node; once a suitable node is selected it updates etcd and assigns the Pod to that node

Pending: # the Pod is being created, but not all of its containers have been created yet [for a Pod in this state, check whether the storage the Pod depends on can be mounted with the right permissions, etc.]

Failed: # a container in the Pod failed to start, so the Pod cannot work properly

Unknown: # the Pod's current state cannot be obtained for some reason, usually a communication error with the node the Pod runs on

Initialized: # all of the Pod's init containers have completed

ImagePullBackOff: # the node the Pod runs on failed to pull the image

Running: # the Pod's containers have been created and started

Ready: # the containers in the Pod are ready to serve requests

Error: # an error occurred while the Pod was starting

NodeLost: # the node the Pod runs on has lost contact

Waiting: # the Pod is waiting to start

Terminating: # the Pod is being destroyed

CrashLoopBackOff: # a container in the Pod has crashed, and kubelet is restarting it

InvalidImageName: # the node cannot resolve the image name, so the image cannot be pulled

ImageInspectError: # the image cannot be verified, caused by an incomplete image

ErrImageNeverPull: # the pull policy forbids pulling the image, e.g. the registry project is private

RegistryUnavailable: # the image registry is unavailable: a network problem, or Harbor is down

ErrImagePull: # the image pull failed: it timed out or the download was forcibly terminated

CreateContainerConfigError: # the container configuration used by kubelet cannot be created

CreateContainerError: # creating the container failed

RunContainerError: # the container failed to run, e.g. there is no PID-1 init process inside the container

ContainerNotInitialized: # the Pod has not finished initializing

ContainerNotReady: # the Pod is not ready

ContainerCreating: # the Pod is being created

PodInitializing: # the Pod is initializing

DockerDaemonNotReady: # the docker service on the node is not running

NetworkPluginNotReady: # the network plugin has not started
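
When a Pod is stuck in one of these states, the usual first steps are (standard kubectl commands; names are illustrative):

kubectl -n test describe pod <pod-name>              # the Events section usually names the cause
kubectl -n test logs <pod-name> --previous           # logs of the previous, crashed container (CrashLoopBackOff)
kubectl -n test get events --sort-by=.metadata.creationTimestamp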

Monitoring Pod status with startupProbe, livenessProbe and readinessProbe

Probe types

  • startupProbe # startup probe, introduced in Kubernetes 1.16

    Checks whether the application inside the container has finished starting. If a startup probe is configured, all other probes are disabled until it succeeds. If the startup probe fails, kubelet kills the container and the container follows its restart policy. If the container does not provide a startup probe, the default state is Success.

  • livenessProbe # liveness probe

    Checks whether the container is running. If the liveness probe fails, kubelet kills the container and the container is subject to its restart policy. If the container does not provide a liveness probe, the default state is Success. livenessProbe controls whether the Pod gets restarted.

  • readinessProbe # readiness probe

    If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of every Service that matches the Pod. The readiness state before the initial delay defaults to Failure. If the container does not provide a readiness probe, the default state is Success. readinessProbe controls whether the Pod is added to a Service.

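Each probe uses one of several handlers. Besides the httpGet handler used in the examples below, exec and tcpSocket are available; a minimal sketch (the command and port are illustrative):

        livenessProbe:
          exec:                      # healthy if the command exits with status 0
            command: ["cat", "/tmp/healthy"]
        readinessProbe:
          tcpSocket:                 # ready if the TCP port accepts a connection
            port: 8080
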
Common probe configuration parameters

  • initialDelaySeconds: 120

    # initial delay: tells kubelet how many seconds the application should wait before the first probe; default 0, minimum 0

  • periodSeconds: 60

    # probe interval: how many seconds between probes; default 10 seconds, minimum 1

  • timeoutSeconds: 5

    # single-probe timeout: how many seconds to wait before a probe times out; default 1 second, minimum 1

  • successThreshold: 1

    # retries from failure to success: the minimum number of consecutive successes for the probe to be considered successful after a failure; default 1; must be 1 for liveness probes; minimum 1

  • failureThreshold: 3

    # retries from success to failure: how many times Kubernetes retries once the Pod has started and a probe fails; giving up on a liveness probe means restarting the container, giving up on a readiness probe marks the Pod not-ready; default 3, minimum 1

HTTP probe configuration parameters:

    HTTP probes accept additional fields under httpGet:

  • host

    # hostname to connect to; defaults to the Pod IP; you can also set the "Host" HTTP header instead

  • scheme: HTTP

    # scheme used to connect to the host (HTTP or HTTPS); defaults to HTTP

  • path: /monitor/index.html

    # path to access on the HTTP server

  • httpHeaders:

    # custom headers to set in the request; HTTP allows repeated headers

  • port: 80

    # port (number or name) to access on the container; a number must be in the range 1 to 65535
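
Putting these fields together, a sketch of an httpGet probe (host, path and header values are illustrative):

        livenessProbe:
          httpGet:
            path: /monitor/index.html
            port: 80
            scheme: HTTP
            httpHeaders:
            - name: Host
              value: www.magedu.com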

Introduction to postStart and preStop handlers

https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/

Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp1
  labels:
    app: myserver-myapp1
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp1-label
  template:
    metadata:
      labels:
        app: myserver-myapp1-label
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: myserver-myapp1-label
        image: tomcat:7.0.94-alpine 
        lifecycle:
          postStart:
            exec:
             #command: e.g. register itself with the service registry
              command: ["/bin/sh", "-c", "echo 'Hello from the postStart handler' >> /usr/local/tomcat/webapps/ROOT/index.html"]

            #httpGet:
            #  #path: /monitor/monitor.html
            #  host: www.magedu.com
            #  port: 80
            #  scheme: HTTP
            #  path: index.html
          preStop:
            exec:
             #command: e.g. deregister itself from the service registry
              command: ["/usr/local/tomcat/bin/catalina.sh","stop"]
        ports:
          - name: http
            containerPort: 8080
        startupProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 5 # wait 5s before the first probe
          failureThreshold: 3  # consecutive failures before the startup probe gives up
          periodSeconds: 3 # probe interval
        readinessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3

---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp1-service
  namespace: test
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
    nodePort: 30088
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp1-label

Pod creation and termination flow

  • Creating a Pod

scheduling completes

the container starts and postStart executes

livenessProbe starts checking

the Pod enters the Running state

readinessProbe starts checking

the Service associates with the Pod

the Pod receives client requests

  • Deleting a Pod

The Pod is set to the "Terminating" state, removed from the Service's Endpoints list, and no longer receives client requests

preStop executes

Kubernetes sends SIGTERM (the normal termination signal) to the main process in the Pod's containers; this signal lets the container know it will shut down soon

terminationGracePeriodSeconds: 60 # optional termination grace period; if a deletion grace period is set, Kubernetes waits for it to expire, otherwise it waits at most 30s

The time Kubernetes waits is called the graceful-termination grace period, 30 seconds by default; note that it runs in parallel with the preStop hook and the SIGTERM signal

Kubernetes may not wait for the preStop hook to finish (if the main process still has not exited after at most 30 seconds, the Pod is forcibly terminated)

SIGKILL is sent to the Pod, and the Pod is deleted
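
The grace period can be observed or overridden at deletion time (standard kubectl flags; the pod name is illustrative):

kubectl -n test delete pod myserver-myapp1-xxxx --grace-period=10          # shorten the wait to 10s
kubectl -n test delete pod myserver-myapp1-xxxx --grace-period=0 --force   # skip graceful termination entirely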

Building container images with nerdctl + buildkitd

BuildKit components

buildkitd (server side): currently supports runc and containerd as the image-build backend; the default is runc, which can be switched to containerd.
buildctl (client side): parses the Dockerfile and sends build requests to the buildkitd server.

1) Deploying buildkitd

cd /usr/local/src/
wget https://github.com/moby/buildkit/releases/download/v0.10.3/buildkit-v0.10.3.linux-amd64.tar.gz
tar -xvf buildkit-v0.10.3.linux-amd64.tar.gz -C /usr/local/bin
mv /usr/local/bin/bin/buildctl /usr/local/bin/bin/buildkitd /usr/local/bin
vim /lib/systemd/system/buildkit.socket
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit

[Socket]
ListenStream=%t/buildkit/buildkitd.sock

[Install]
WantedBy=sockets.target

vim /lib/systemd/system/buildkitd.service

[Unit]
Description=BuildKit
Requires=buildkit.socket
After=buildkit.socket
Documentation=https://github.com/moby/buildkit

[Service]
ExecStart=/usr/local/bin/buildkitd --oci-worker=false --containerd-worker=true

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable buildkitd
systemctl start buildkitd
systemctl status buildkitd
vim /etc/profile    # append the line below, then source /etc/profile
source <(nerdctl completion bash)
nerdctl login --insecure-registry harbor.jackedu.cn
nerdctl pull centos:7.9.2009


Distributing the Harbor certificate

On the Harbor host:
cd /apps/harbor/certs
openssl x509 -inform PEM -in jackedu.net.crt -out jackedu.net.cert

On the image-build host:
mkdir -p /etc/containerd/certs.d/harbor.jackedu.net/

On the Harbor host:
scp ca.crt jackedu.net.cert jackedu.net.key 192.168.44.12:/etc/containerd/certs.d/harbor.jackedu.net/


nerdctl login harbor.jackedu.net
Enter Username: admin
Enter Password: 
WARNING: Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded


2) Building an image

cd /opt/dockerfile/ubuntu
root@k8s-master1:/opt/dockerfile/ubuntu# ll
total 1108
drwxr-xr-x 3 root root     148 Feb 19 19:08 ./
drwxr-xr-x 3 root root      58 Feb 19 19:02 ../
-rw-r--r-- 1 root root     885 Aug  5  2022 Dockerfile
-rw-r--r-- 1 root root     240 Feb 19 19:08 build-command.sh
-rw-r--r-- 1 root root   38751 Aug  5  2022 frontend.tar.gz
drwxr-xr-x 3 root root      38 Aug  5  2022 html/
-rw-r--r-- 1 root root 1073322 May 24  2022 nginx-1.22.0.tar.gz
-rw-r--r-- 1 root root    2812 Oct  3  2020 nginx.conf
-rw-r--r-- 1 root root    1139 Aug  5  2022 sources.list
root@k8s-master1:/opt/dockerfile/ubuntu# cat build-command.sh 
#!/bin/bash
#docker build -t harbor.magedu.net/myserver/nginx:v1 .
#docker push harbor.magedu.net/myserver/nginx:v1

nerdctl build -t harbor.magedu.net/magedu/nginx-base:1.22.0 .

nerdctl push harbor.magedu.net/magedu/nginx-base:1.22.0

3) Working around nerdctl's HTTPS certificate requirement when building images

When a second-stage build depends on a locally hosted base image, the pull cannot go over HTTPS and has to use HTTP.

The workaround is to put Nginx in front: Harbor keeps only port 80, and Nginx proxies 443.

cd /usr/local/src
tar xvf nginx-1.22.0.tar.gz
cd nginx-1.22.0
./configure --prefix=/apps/nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre \
--with-stream \
--with-stream_ssl_module \
--with-stream_realip_module
make && make install
Create a certificate directory and copy Harbor's certificate into it:
mkdir /apps/nginx/certs
root@k8s-harbor:/apps/harbor/certs# scp jackedu.net.crt jackedu.net.key 192.168.44.11:/apps/nginx/certs

vim /apps/nginx/conf/nginx.conf
client_max_body_size 1000m;
    server {
        listen       80;
        #server_name  localhost;
        listen 443 ssl;
        server_name  harbor.jackedu.net;
        ssl_certificate /apps/nginx/certs/jackedu.net.crt;
        ssl_certificate_key /apps/nginx/certs/jackedu.net.key;
        ssl_session_timeout 20m;
        ssl_session_cache    shared:sslcache:20m;
        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
              proxy_pass http://192.168.40.11;
        }
    }


4) buildkitd configuration file:

root@k8s-master1:/opt/dockerfile/ubuntu# cat /etc/buildkit/buildkitd.toml 
[registry."harbor.jackedu.net"]
  http = true
  insecure = true

5) nerdctl configuration file
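
The nerdctl client reads /etc/nerdctl/nerdctl.toml; a hedged sketch of the registry-related settings (insecure_registry and hosts_dir are nerdctl config options, verify them against docs/config.md for your nerdctl version):

# /etc/nerdctl/nerdctl.toml
insecure_registry = true
hosts_dir = ["/etc/containerd/certs.d", "/etc/docker/certs.d"]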

Running Nginx and Java services from custom images, with static/dynamic separation backed by NAS

1. Building the JDK image

Built from the centos:7.9.2009 base image

#JDK Base Image
FROM harbor.jackedu.net/baseimage/centos:7.9.2009
#FROM centos:7.9.2009



ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk 
ADD profile /etc/profile


ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
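
The build command for this image is not included in the notes, but the tag can be inferred from the FROM line of the Tomcat Dockerfile below; a sketch following the same build-command.sh pattern as earlier:

nerdctl build -t harbor.jackedu.net/baseimage/jdk-base:v8.212 .
nerdctl push harbor.jackedu.net/baseimage/jdk-base:v8.212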

2. Building the Tomcat base image

#Tomcat 8.5.43 base image
FROM harbor.jackedu.net/baseimage/jdk-base:v8.212


RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
ADD apache-tomcat-8.5.43.tar.gz  /apps
RUN useradd tomcat -u 2050 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data

Test the image:

nerdctl run -it --rm harbor.jackedu.net/baseimage/tomcat-base:v8.5.43 bash

3. Building the application image

#tomcat web1
FROM harbor.jackedu.net/baseimage/tomcat-base:v8.5.43 

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
#ADD filebeat.yml /etc/filebeat/filebeat.yml 
RUN useradd nginx
RUN chown  -R nginx.nginx /data/ /apps/
#ADD filebeat-7.5.1-x86_64.rpm /tmp/
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]
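
run_tomcat.sh is referenced by the CMD but not reproduced in the notes; a hypothetical sketch, assuming the paths and the nginx user created in this Dockerfile (the real script may differ):

#!/bin/bash
# start tomcat as the unprivileged user created above
su - nginx -c "/apps/tomcat/bin/catalina.sh start"
# keep PID 1 in the foreground so the container does not exit
tail -f /etc/hosts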

4. Deploying to Kubernetes

kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app2-deployment-label
  name: magedu-tomcat-app2-deployment
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-tomcat-app2-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app2-selector
    spec:
      containers:
      - name: magedu-tomcat-app2-container
        image: harbor.magedu.net/magedu/tomcat-app2:v1 
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: magedu-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: magedu-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: magedu-images
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/magedu/images
      - name: magedu-static
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/magedu/static
#      nodeSelector:
#        project: magedu
#        app: tomcat
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-app2-service-label
  name: magedu-tomcat-app2-service
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    #nodePort: 40003
  selector:
    app: magedu-tomcat-app2-selector

5. Building the Nginx image

FROM harbor.jackedu.net/baseimage/nginx-base:1.22.0

ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD app1.tar.gz  /usr/local/nginx/html/webapp/
ADD index.html  /usr/local/nginx/html/index.html

#mount points for static resources
RUN mkdir -p /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images

EXPOSE 80 443

CMD ["nginx"]

Point nginx.conf at the backend Tomcat Service:

Set daemon off; in nginx.conf so that nginx runs in the foreground; if nginx daemonizes into the background, the container will exit right after starting (see the directive below).
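
The directive sits at top level, outside any block:

# very top of nginx.conf, outside any block
daemon off;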

upstream  tomcat_webserver {
        server  magedu-tomcat-app1-service.test.svc.cluster.local:80; 
}

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        location /webapp {
            root   html;
        }
    }

Deploying the Nginx image to Kubernetes

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-nginx-deployment-label
  name: magedu-nginx-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magedu-nginx-selector
  template:
    metadata:
      labels:
        app: magedu-nginx-selector
    spec:
      containers:
      - name: magedu-nginx-container
        image: harbor.jackedu.net/app/nginx-web1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 256Mi

        volumeMounts:
        - name: magedu-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: magedu-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: magedu-images
        nfs:
          server: 192.168.44.11
          path: /data/k8sdata/magedu/images 
      - name: magedu-static
        nfs:
          server: 192.168.44.11
          path: /data/k8sdata/magedu/static
      #nodeSelector:
      #  group: magedu

    

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-nginx-service-label
  name: magedu-nginx-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30090
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30091
  selector:
    app: magedu-nginx-selector

Verify that pictures placed under the NFS-backed images directory download and display correctly.

6. Building the ZooKeeper image

Pull the slim_java:8 image:

nerdctl pull elevy/slim_java:8
nerdctl tag elevy/slim_java:8 harbor.jackedu.net/baseimage/slim_java:8
nerdctl push harbor.jackedu.net/baseimage/slim_java:8

Build the ZooKeeper image:

FROM harbor.jackedu.net/baseimage/slim_java:8 

ENV ZK_VERSION 3.4.14
ADD repositories /etc/apk/repositories 
# Download Zookeeper
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
      ca-certificates   \
      gnupg             \
      tar               \
      wget &&           \
    #
    # Install dependencies
    apk add --no-cache  \
      bash &&           \
    #
    #
    # Verify the signature
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    #
    # Set up directories
    #
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    #
    # Install
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    #
    # Slim down
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    #
    # Clean up
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"

COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010

ENTRYPOINT [ "/entrypoint.sh" ]

CMD [ "zkServer.sh", "start-foreground" ]

EXPOSE 2181 2888 3888 9010

Deployment

First create three directories on the NFS server:

mkdir -p /data/k8sdata/magedu/zookeeper-datadir-1
mkdir -p /data/k8sdata/magedu/zookeeper-datadir-2
mkdir -p /data/k8sdata/magedu/zookeeper-datadir-3

Create the PVs:

kubectl apply -f zookeeper-persistentvolume.yaml
root@k8s-master1:/opt/dockerfile/k8s-data/yaml/magedu/zookeeper/pv# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                STORAGECLASS          REASON   AGE
pvc-d77d3e00-d42e-4508-b420-45474c8cdd16   500Mi      RWX            Retain           Bound       default/myserver-myapp-dynamic-pvc   managed-nfs-storage            55d
zookeeper-datadir-pv-1                     10Gi       RWO            Retain           Available                                                                       12s
zookeeper-datadir-pv-2                     10Gi       RWO            Retain           Available                                                                       12s
zookeeper-datadir-pv-3                     10Gi       RWO            Retain           Available
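
zookeeper-persistentvolume.yaml itself is not reproduced in the notes; a sketch of the first of the three PVs, assuming the NFS server at 192.168.44.11 used elsewhere in these notes (adjust server and path to your environment):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.44.11                            # assumption: the NFS server used elsewhere in these notes
    path: /data/k8sdata/magedu/zookeeper-datadir-1   # directory created above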

Create the PVCs:

kubectl apply -f zookeeper-persistentvolumeclaim.yaml
root@k8s-master1:/opt/dockerfile/k8s-data/yaml/magedu/zookeeper/pv# kubectl get pvc -n test
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   10Gi       RWO                           12s
zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   10Gi       RWO                           12s
zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   10Gi       RWO                           12s
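
Likewise, a sketch of the first claim in zookeeper-persistentvolumeclaim.yaml, matching the Bound output above (pinning the claim to its PV with volumeName is an assumption based on the 1:1 binding shown):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1   # assumption: bind explicitly to the matching PV
  resources:
    requests:
      storage: 10Gi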

Create the ZooKeeper deployment:

kubectl apply -f zookeeper.yaml

Verification

Log in to a zk pod and run:
bash-4.3# ./zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower
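
A quick functional check with the stock CLI from inside the pod (sketch; the znode path and value are illustrative):

bash-4.3# zkCli.sh -server 127.0.0.1:2181
create /test "hello"
get /test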

Adding a test node through the PrettyZoo client also succeeds, confirming the cluster works.

