1. Managing Stateless Applications: Deployment
1-1. Creating a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.16.1-alpine
        name: nginx
Note: starting with Kubernetes 1.16, the older API versions for Deployment were removed and only apps/v1 can be used; versions earlier than 1.16 still accept extensions and other API groups.
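Not from the original text, but a quick way to confirm which API groups your cluster actually serves and which version kubectl resolves for Deployment:
# List the API groups/versions the API server serves; apps/v1 should appear here
kubectl api-versions | grep apps
# The VERSION line shows the group/version kubectl uses for "deployment"
kubectl explain deployment | head -n 3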
Explanation of the example:
1. name (nginx): the name of the Deployment;
2. replicas: the number of Pod replicas to create;
3. selector: defines how the Deployment finds the Pods it manages; it must match the labels in the template, and it is required when apiVersion is apps/v1;
4. template contains the following fields:
   - labels (app: nginx): the labels applied to the Pods;
   - spec: the Pod runs a single container named nginx;
   - image: the image used by this Pod's container;
   - ports: the container port used to send and receive traffic (not set in this example).
- Create this Deployment with kubectl create:
[root@k8s-master1 opt]# kubectl create -f nginx_deploy.yaml
deployment.apps/nginx created
- Check the Deployment's status with kubectl get or kubectl describe:
[root@k8s-master1 opt]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           76s
[root@k8s-master1 opt]# kubectl describe deploy nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Sat, 17 Sep 2022 10:30:42 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.16.1-alpine
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-99c78bb77 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  101s  deployment-controller  Scaled up replica set nginx-99c78bb77 to 3
Note: meaning of the columns printed by kubectl get deploy:
➢ NAME: the name of the Deployment in the cluster;
➢ READY: the number of ready Pods versus the total number of replicas;
➢ UP-TO-DATE: the number of replicas that have been updated to reach the desired state;
➢ AVAILABLE: the number of application replicas available to users; a value of 0 would mean no Pods have reached the desired state yet (here it is 3);
➢ AGE: how long the application has been running.
- View the ReplicaSet currently owned by this Deployment:
[root@k8s-master1 opt]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
nginx-99c78bb77   3         3         3       6m20s
➢ DESIRED: the desired number of replicas;
➢ CURRENT: the number of replicas currently running;
Note: once a Deployment has been updated, it may own more than one ReplicaSet. You can use -o yaml (or kubectl describe) to find out which ReplicaSet is the current one; the remaining ReplicaSets are kept as historical revisions for operations such as rollback (a quick sketch follows the Pod listing below).
- View the Pods created by this Deployment. The Pod hash 99c78bb77 matches the hash of the ReplicaSet owned by the Deployment shown above:
[root@k8s-master1 opt]# kubectl get pod --show-labels
NAME                    READY   STATUS    RESTARTS   AGE   LABELS
nginx-99c78bb77-ndd99   1/1     Running   0          10m   app=nginx,pod-template-hash=99c78bb77
nginx-99c78bb77-s4fr9   1/1     Running   0          10m   app=nginx,pod-template-hash=99c78bb77
nginx-99c78bb77-tgjk9   1/1     Running   0          10m   app=nginx,pod-template-hash=99c78bb77
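As a quick sketch (the exact grep filters here are my own choice, not from the original text), the current ReplicaSet can be identified either from the NewReplicaSet field of kubectl describe, or from the deployment.kubernetes.io/revision annotation carried by each ReplicaSet:
# NewReplicaSet/OldReplicaSets name the current and historical ReplicaSets
kubectl describe deploy nginx | grep -E 'NewReplicaSet|OldReplicaSets'
# Each ReplicaSet also carries a revision annotation set by the Deployment controller
kubectl get rs -l app=nginx -o yaml | grep -E 'name: nginx-|deployment.kubernetes.io/revision'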
1-2. Updating a Deployment
Note: a Deployment rollout is triggered if, and only if, the Deployment's Pod template (.spec.template) is changed, for example by changing the memory or CPU configuration, or the container image.
- Suppose we want to update the image used by the Nginx Pods, passing --record so the change is recorded and the corresponding information can be inspected later when rolling back.
- To update the current nginx:1.16.1-alpine to nginx:1.17.1-alpine, run:
[root@k8s-master1 opt]# kubectl set image deployment nginx nginx=nginx:1.17.1-alpine --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated
- You can also edit the Deployment directly with the edit command:
[root@k8s-master1 opt]# kubectl edit deploy nginx
- While the update is in progress, you can watch its status with kubectl rollout status:
[root@k8s-master1 ~]# kubectl rollout status deploy nginx
Waiting for deployment "nginx" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
deployment "nginx" successfully rolled out
Note: as the output shows, the update replaces old Pods with new ones step by step: a new Pod is created first, and once it is Running an old Pod is deleted while another new Pod is created. When a rollout is triggered, a new ReplicaSet is created and the old ReplicaSet is kept. Looking at the ReplicaSets at this point, the new and old ones can be told apart by their AGE or READY columns:
[root@k8s-master1 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
nginx-6dbbf96dd   3         3         3       5m34s
nginx-99c78bb77   0         0         0       21m
- Use kubectl describe to see the detailed rollout events:
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.17.1-alpine
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-6dbbf96dd (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  22m    deployment-controller  Scaled up replica set nginx-99c78bb77 to 3
  Normal  ScalingReplicaSet  6m48s  deployment-controller  Scaled up replica set nginx-6dbbf96dd to 1
  Normal  ScalingReplicaSet  6m45s  deployment-controller  Scaled down replica set nginx-99c78bb77 to 2
  Normal  ScalingReplicaSet  6m45s  deployment-controller  Scaled up replica set nginx-6dbbf96dd to 2
  Normal  ScalingReplicaSet  6m42s  deployment-controller  Scaled down replica set nginx-99c78bb77 to 1
  Normal  ScalingReplicaSet  6m42s  deployment-controller  Scaled up replica set nginx-6dbbf96dd to 3
  Normal  ScalingReplicaSet  6m39s  deployment-controller  Scaled down replica set nginx-99c78bb77 to 0
Note: the describe output shows that when the Deployment was first created, it created a ReplicaSet named nginx-99c78bb77 and scaled it directly to 3 replicas. When the Deployment was updated, it created a new ReplicaSet named nginx-6dbbf96dd, scaled it up to 1, and then scaled the old ReplicaSet down to 2; with the default settings (maxUnavailable 25% rounds down to 0 and maxSurge 25% rounds up to 1 for 3 replicas), at least 3 Pods stay available and at most 4 Pods exist at any time. It continued scaling the new and old ReplicaSets up and down with the same rolling-update strategy until the new ReplicaSet owned all 3 replicas and the old ReplicaSet was scaled down to 0.
1-3. Rolling Back a Deployment
When an updated version turns out to be unstable or misconfigured, the Deployment can be rolled back.
- First, view the rollout history with kubectl rollout history:
[root@k8s-master1 ~]# kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deployment nginx nginx=nginx:1.17.1-alpine --record=true
- To see the details of a specific revision, pass its number with --revision:
[root@k8s-master1 ~]# kubectl rollout history deploy nginx --revision=2
deployment.apps/nginx with revision #2
Pod Template:
  Labels:       app=nginx
                pod-template-hash=6dbbf96dd
  Annotations:  kubernetes.io/change-cause: kubectl set image deployment nginx nginx=nginx:1.17.1-alpine --record=true
  Containers:
   nginx:
    Image:        nginx:1.17.1-alpine
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
- To roll back to the previous stable revision, simply run kubectl rollout undo:
[root@k8s-master1 opt]# kubectl rollout undo deploy nginx
deployment.apps/nginx rolled back
- Check the rollout history again, or inspect the result with kubectl describe:
[root@k8s-master1 ~]# kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
2         kubectl set image deployment nginx nginx=nginx:1.17.1-alpine --record=true
3         <none>
# Inspect the Deployment with kubectl describe (the Pod template is back on nginx:1.16.1-alpine)
kubectl describe deploy nginx
# Output:
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.16.1-alpine
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-99c78bb77 (3/3 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  36m                deployment-controller  Scaled up replica set nginx-6dbbf96dd to 1
  Normal  ScalingReplicaSet  36m                deployment-controller  Scaled down replica set nginx-99c78bb77 to 2
  Normal  ScalingReplicaSet  36m                deployment-controller  Scaled up replica set nginx-6dbbf96dd to 2
  Normal  ScalingReplicaSet  36m                deployment-controller  Scaled down replica set nginx-99c78bb77 to 1
  Normal  ScalingReplicaSet  36m                deployment-controller  Scaled up replica set nginx-6dbbf96dd to 3
  Normal  ScalingReplicaSet  36m                deployment-controller  Scaled down replica set nginx-99c78bb77 to 0
  Normal  ScalingReplicaSet  49s                deployment-controller  Scaled up replica set nginx-99c78bb77 to 1
  Normal  ScalingReplicaSet  47s                deployment-controller  Scaled down replica set nginx-6dbbf96dd to 2
  Normal  ScalingReplicaSet  47s                deployment-controller  Scaled up replica set nginx-99c78bb77 to 2
  Normal  ScalingReplicaSet  43s (x2 over 37m)  deployment-controller  Scaled up replica set nginx-99c78bb77 to 3
  Normal  ScalingReplicaSet  43s                deployment-controller  Scaled down replica set nginx-6dbbf96dd to 1
  Normal  ScalingReplicaSet  39s                deployment-controller  Scaled down replica set nginx-6dbbf96dd to 0
Note: the output above confirms that the Deployment has been rolled back to the previous revision.
- To roll back to a specific revision, pass its number with --to-revision:
kubectl rollout undo deploy nginx --to-revision=2
1-4. Scaling a Deployment
When traffic grows, or ahead of a planned event, three Pods may no longer be enough to handle the load, so the replica count can be increased in advance.
- Use kubectl scale to adjust the number of Pod replicas dynamically, for example to increase it to 5:
[root@k8s-master1 opt]# kubectl scale deployment nginx --replicas=5
deployment.apps/nginx scaled
- Check the replica count after scaling:
[root@k8s-master1 opt]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   5/5     5            5           48m
[root@k8s-master1 opt]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-99c78bb77-7zp77   1/1     Running   0          2m21s
nginx-99c78bb77-l9qgx   1/1     Running   0          13m
nginx-99c78bb77-nnksl   1/1     Running   0          13m
nginx-99c78bb77-npcw9   1/1     Running   0          13m
nginx-99c78bb77-sdgxp   1/1     Running   0          2m21s
Note: the Deployment has now been scaled from 3 replicas to 5.
1-5. Notes on Updating a Deployment
Revision history cleanup:
By default, 10 old ReplicaSets are retained as revision history and the rest are garbage-collected in the background. The number of retained ReplicaSets can be set with .spec.revisionHistoryLimit; setting it to 0 keeps no history at all.
Update strategy:
1) .spec.strategy.type==Recreate: recreate, i.e. delete all old Pods first and then create new ones;
2) .spec.strategy.type==RollingUpdate: rolling update; maxUnavailable and maxSurge can be specified to control the rolling-update process:
   - .spec.strategy.rollingUpdate.maxUnavailable: the maximum number of Pods that may be unavailable during a rolling update. Optional; defaults to 25%; may be an absolute number or a percentage; it must not be 0 if maxSurge is 0;
   - .spec.strategy.rollingUpdate.maxSurge: the maximum number of Pods that may exist above the desired count. Optional; defaults to 25%; may be an absolute number or a percentage; it must not be 0 if maxUnavailable is 0.
A spec fragment putting these settings together is sketched below.
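As a sketch (the values here are illustrative, not from the original text), revisionHistoryLimit and the rolling-update parameters sit directly under the Deployment's spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  revisionHistoryLimit: 10   # keep at most 10 old ReplicaSets for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%    # at most 25% of desired Pods may be unavailable
      maxSurge: 25%          # at most 25% extra Pods may exist during the update
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1-alpine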
2. Managing Stateful Applications: StatefulSet
2-1. Writing a StatefulSet Resource File
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx        # must match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3           # defaults to 1
  minReadySeconds: 10   # defaults to 0
  template:
    metadata:
      labels:
        app: nginx      # must match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.16.1-alpine
        ports:
        - containerPort: 80
          name: web
Notes:
➢ kind: Service defines a headless Service named nginx. Through it, each Pod gets a stable DNS name of the form web-0.nginx.default.svc.cluster.local (and similarly for the other Pods). Since no namespace is specified, everything is deployed in default (a DNS lookup sketch follows the creation output below).
➢ kind: StatefulSet defines a StatefulSet named web; replicas is the number of Pod replicas to deploy, 3 in this example.
◆ A StatefulSet must set a Pod selector (.spec.selector) that matches the labels of its Pod template (.spec.template.metadata.labels). Before version 1.8, the field defaulted if omitted; since 1.8, omitting a matching Pod selector causes StatefulSet creation to fail. When the StatefulSet controller creates a Pod, it adds a label statefulset.kubernetes.io/pod-name whose value is the Pod's name, which a Service can use for matching.
2-2. Creating the StatefulSet
[root@k8s-master1 opt]# kubectl create -f nginx_sts.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-master1 opt]# kubectl get sts
NAME   READY   AGE
web    3/3     16m
[root@k8s-master1 opt]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   9d
nginx        ClusterIP   None         <none>        80/TCP    15m
[root@k8s-master1 opt]# kubectl get pod -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          17m
web-1   1/1     Running   0          17m
web-2   1/1     Running   0          17m
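As a quick sketch (the throwaway busybox Pod and its image tag are my own choice, not from the original text), you can verify the per-Pod DNS names served through the headless Service from inside the cluster:
# Run a temporary Pod and resolve one of the StatefulSet Pod names
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup web-0.nginx.default.svc.cluster.local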
2-3. How a StatefulSet Creates Pods
A StatefulSet deploys and scales its Pods according to the following rules:
1. For a StatefulSet with N replicas, Pods are created sequentially, in order from 0 to N-1;
2. When Pods are deleted, they are terminated in reverse order, from N-1 to 0;
3. Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready;
4. Before a Pod is terminated, all of its successors must be completely shut down.
➢ A StatefulSet's pod.Spec.TerminationGracePeriodSeconds (the grace period for terminating a Pod) should not be set to 0; doing so is extremely unsafe for StatefulSet Pods. Gracefully deleting a StatefulSet Pod is both necessary and safe, because it ensures the Pod shuts down cleanly before the kubelet removes it from the API server.
➢ When the Nginx example above is created, the three Pods are deployed in the order web-0, web-1, web-2. web-1 is not deployed until web-0 is Running and Ready, and likewise web-2 is not deployed until web-1 is Running and Ready. If web-0 fails while web-1 is Running and Ready but web-2 has not yet been launched, web-2 will not be started until web-0 is back to Running and Ready.
2-4. Scaling a StatefulSet
- Scale up:
[root@k8s-master1 opt]# kubectl scale sts web --replicas=5
statefulset.apps/web scaled
- Check the result of scaling up:
[root@k8s-master1 opt]# kubectl get pod -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          27m
web-1   1/1     Running   0          27m
web-2   1/1     Running   0          26m
web-3   1/1     Running   0          24s
web-4   1/1     Running   0          4s
# The same can be watched with -w:
[root@k8s-master1 opt]# kubectl get pod -w -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          29m
web-1   1/1     Running   0          28m
web-2   1/1     Running   0          28m
web-3   1/1     Running   0          118s
web-4   1/1     Running   0          98s
Note: scaling a StatefulSet works the same way as scaling a Deployment; see the Deployment scaling section above for details.
2-5. StatefulSet Update Strategies (RollingUpdate is the default)
1) OnDelete strategy
The OnDelete update strategy implements the legacy behavior (it was the default before version 1.7). When this strategy is selected and the StatefulSet's .spec.template field is modified, the StatefulSet controller does not update the Pods automatically; the Pods must be deleted manually before the controller creates new ones.
2) RollingUpdate strategy
The RollingUpdate strategy automatically updates all Pods in a StatefulSet, rolling them in reverse ordinal order (from the highest index down to 0).
- Configuring the OnDelete strategy:
updateStrategy:
  type: OnDelete
Note: in the YAML file, the updateStrategy block for OnDelete sits at the same level as serviceName (i.e. directly under spec).
- Configuring the RollingUpdate strategy:
updateStrategy:
  rollingUpdate:
    partition: 0
  type: RollingUpdate
Note: likewise, the RollingUpdate settings sit at the same level as serviceName; a fuller fragment showing the placement is sketched below.
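A sketch of where the field goes (only the surrounding lines are shown; the Pod template and the rest of the spec are unchanged from the earlier StatefulSet example):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  updateStrategy:        # sibling of serviceName, directly under spec
    type: RollingUpdate
    rollingUpdate:
      partition: 0       # Pods with ordinal >= partition are updated on rollout
  selector:
    matchLabels:
      app: nginx
  # template: unchanged from the earlier StatefulSet example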
- Updating with the RollingUpdate strategy:
[root@k8s-master1 opt]# kubectl set image sts web nginx=nginx:1.17.1-alpine --record
Flag --record has been deprecated, --record will be removed in the future
statefulset.apps/web image updated
# Check the images after the update
[root@k8s-master1 ~]# kubectl get po -l app=nginx -oyaml | grep image:
    - image: nginx:1.17.1-alpine
      image: docker.io/library/nginx:1.17.1-alpine
    - image: nginx:1.17.1-alpine
      image: docker.io/library/nginx:1.17.1-alpine
    - image: nginx:1.17.1-alpine
      image: docker.io/library/nginx:1.17.1-alpine
    - image: nginx:1.17.1-alpine
      image: docker.io/library/nginx:1.17.1-alpine
    - image: nginx:1.17.1-alpine
      image: docker.io/library/nginx:1.17.1-alpine
Note: every Pod has now been updated from nginx:1.16.1-alpine to nginx:1.17.1-alpine.
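As with Deployments, the StatefulSet rollout can be watched while it progresses (a sketch; the exact output will vary):
kubectl rollout status sts web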
- Testing an update with the OnDelete strategy:
# Update the image (with updateStrategy set to OnDelete, as shown above)
[root@k8s-master1 opt]# kubectl set image sts web nginx=nginx:1.17.1-alpine --record
Flag --record has been deprecated, --record will be removed in the future
statefulset.apps/web image updated
# Delete web-2 so that it is recreated from the new template
[root@k8s-master1 opt]# kubectl delete pod web-2
pod "web-2" deleted
# Check web-2 after it has been recreated
[root@k8s-master1 opt]# kubectl get pod -owide
NAME    READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          7m16s   172.24.247.53    node-2        <none>           <none>
web-1   1/1     Running   0          6m56s   172.23.119.172   node-1        <none>           <none>
web-2   1/1     Running   0          8s      172.25.115.74    k8s-master2   <none>           <none>
[root@k8s-master1 opt]# kubectl get pod -l app=nginx -oyaml | grep image:
    - image: nginx:1.16.1-alpine
      image: docker.io/library/nginx:1.16.1-alpine
    - image: nginx:1.16.1-alpine
      image: docker.io/library/nginx:1.16.1-alpine
    - image: nginx:1.17.1-alpine
      image: docker.io/library/nginx:1.17.1-alpine
- Staged (partitioned) updates. Note: partitioned updates are only available with the RollingUpdate strategy; to stage an update, change the value of partition:
# Change the partition value
[root@k8s-master1 ~]# kubectl patch sts web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
statefulset.apps/web patched
# Update the 1.16.1 image to 1.17.1
[root@k8s-master1 ~]# kubectl set image sts web nginx=nginx:1.17.1-alpine
statefulset.apps/web image updated
# Check the result of the update
[root@k8s-master1 opt]# kubectl get pod -l app=nginx -oyaml | grep image:
    - image: nginx:1.16.1-alpine
      image: docker.io/library/nginx:1.16.1-alpine
    - image: nginx:1.16.1-alpine
      image: docker.io/library/nginx:1.16.1-alpine
    - image: nginx:1.17.1-alpine
      image: docker.io/library/nginx:1.17.1-alpine
Note: with partition changed from 0 to 2, only web-2 is updated when the image changes, because there are only three web Pods and only those with an ordinal of 2 or higher are rolled; web-0 and web-1 are left untouched.
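Once the updated Pod looks healthy, lowering the partition back to 0 lets the remaining Pods roll as well (a sketch, not from the original text):
kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'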
- The partition value can also be changed with kubectl edit by locating the partition field and editing it:
kubectl edit sts web