Deployment
Used for deploying stateless services. You normally do not manage the Pods or ReplicaSets directly; the Deployment manages them for you.
Creating a Deployment
Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx          # label on the Deployment itself
spec:
  replicas: 3           # number of replicas
  selector:
    matchLabels:
      app: nginx        # defines how the Deployment finds the Pods it manages
  template:
    metadata:
      labels:
        app: nginx      # label on the Pods created from this template
    spec:
      containers:
      - name: nginx     # container name
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create from the manifest file
[root@master01 deployment]# ls
nginx-deployment.yaml
[root@master01 deployment]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
Check the creation status
[root@master01 deployment]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/3 3 2 30s
[root@master01 deployment]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/3 3 2 33s
[root@master01 deployment]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/3 3 2 34s
[root@master01 deployment]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 35s
Check the Deployment rollout status
[root@master01 deployment]# kubectl rollout status deployment nginx-deployment
deployment "nginx-deployment" successfully rolled out
View the ReplicaSet (rs) created by the Deployment
[root@master01 deployment]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-66b6c48dd5 3 3 3 3m43s
Note that the ReplicaSet name is always formatted as [Deployment name]-[hash], where the hash string matches the pod-template-hash label on the ReplicaSet.
[root@master01 deployment]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 1 2d20h
nginx-deployment-66b6c48dd5-d5snq 1/1 Running 0 4m59s
nginx-deployment-66b6c48dd5-rh5v9 1/1 Running 0 4m59s
nginx-deployment-66b6c48dd5-rqzrf 1/1 Running 0 4m59s
View the labels automatically added to each Pod
[root@master01 deployment]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 1 2d20h app=nginx,role=frontend
nginx-deployment-66b6c48dd5-d5snq 1/1 Running 0 6m8s app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-rh5v9 1/1 Running 0 6m8s app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-rqzrf 1/1 Running 0 6m8s app=nginx,pod-template-hash=66b6c48dd5
pod-template-hash: do not change this label.
Updating a Deployment
A rollout is triggered only when the Deployment's Pod template (.spec.template) is changed, for example when a template label or a container image is updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
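A quick way to see the difference (a sketch, commands only):
# Changing the Pod template creates a new ReplicaSet and a new revision
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
# Scaling only changes .spec.replicas, so no new revision shows up in the history
kubectl scale deployment/nginx-deployment --replicas=5
kubectl rollout history deployment/nginx-deployment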
Ways to update:
- kubectl set
- kubectl edit
- kubectl apply -f nginx-deployment.yaml
Use --record to record the change that was made (note: --record has been deprecated in newer kubectl releases in favor of setting the kubernetes.io/change-cause annotation, shown later).
[root@master01 deployment]# kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record
deployment.apps/nginx-deployment image updated
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
3 <none>
4 <none>
5 kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
[root@master01 deployment]# kubectl set image deployment nginx-deployment nginx=nginx:1.16.1
deployment.apps/nginx-deployment image updated
[root@master01 ~]# kubectl rollout status deployment nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
[root@master01 deployment]# kubectl edit deployment nginx-deployment
Edit cancelled, no changes made.
[root@master01 deployment]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment configured
[root@master01 deployment]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-559d658b74 0 0 0 4m17s
nginx-deployment-5787596d54 3 3 2 43s
nginx-deployment-66b6c48dd5 1 1 1 18m
[root@master01 ~]# kubectl rollout status deployment nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
[root@master01 deployment]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 1 2d20h
nginx-deployment-5787596d54-bxjd7 1/1 Running 0 74s
nginx-deployment-5787596d54-qf8mv 1/1 Running 0 55s
nginx-deployment-5787596d54-sc5n9 1/1 Running 0 90s
Characteristics
- A Deployment ensures that only a limited number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable); the strategy fields behind these defaults are shown in the snippet after this list.
- A Deployment also ensures that only a limited number of Pods are created above the desired number. By default, it ensures that the total number of Pods is at most 125% of the desired number (25% max surge).
- If you look closely at the Deployment above, you will see that it first created a new Pod, then deleted old Pods and created new ones. It does not kill old Pods until a sufficient number of new Pods have come up, and it does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at most 4 Pods in total are running. With a Deployment of 4 replicas, the number of Pods stays between 3 and 5.
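The 25% figures above are the defaults of the rolling update strategy and can be tuned in the Deployment spec. A sketch (the values shown are just the defaults written out explicitly):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of the desired Pods may be unavailable during an update
      maxSurge: 25%         # at most 25% more Pods than desired may exist during an update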
More details about the Deployment
[root@master01 deployment]# kubectl describe deployments
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 01 Nov 2022 21:55:08 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 4
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.15.2
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-5787596d54 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 20m deployment-controller Scaled up replica set nginx-deployment-66b6c48dd5 to 3
Normal ScalingReplicaSet 7m2s deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 1
Normal ScalingReplicaSet 6m43s deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 2
Normal ScalingReplicaSet 6m29s deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 1
Normal ScalingReplicaSet 6m29s deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 3
Normal ScalingReplicaSet 4m6s deployment-controller Scaled up replica set nginx-deployment-66b6c48dd5 to 1
Normal ScalingReplicaSet 4m4s deployment-controller Scaled down replica set nginx-deployment-559d658b74 to 2
Normal ScalingReplicaSet 4m4s deployment-controller Scaled up replica set nginx-deployment-66b6c48dd5 to 2
Normal ScalingReplicaSet 3m12s (x2 over 6m43s) deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 2
Normal ScalingReplicaSet 2m53s (x7 over 4m3s) deployment-controller (combined from similar events): Scaled up replica set nginx-deployment-5787596d54 to 3
Normal ScalingReplicaSet 2m41s (x2 over 6m15s) deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 0
Rollover
When a Deployment is updated, it creates a new ReplicaSet to bring up the new Pods; eventually the new ReplicaSet is scaled to .spec.replicas replicas and all old ReplicaSets are scaled down to 0.
If you update the Deployment again while an earlier update is still in progress, the in-progress update stops immediately and the new update starts right away.
For example, suppose you create a Deployment to run 5 replicas of nginx:1.14.2, and then update it to 5 replicas of nginx:1.16.1 while only 3 replicas of nginx:1.14.2 have been created. In that case, the Deployment immediately starts killing the 3 nginx:1.14.2 Pods that already exist and starts creating nginx:1.16.1 Pods. It does not wait for all 5 nginx:1.14.2 replicas to come up before making the change.
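This behaviour can be reproduced by issuing two updates back to back (a sketch; the intermediate tag is only an example):
kubectl set image deployment/nginx-deployment nginx=nginx:1.14.2
# Before the rollout above finishes, issue another update:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
# The ReplicaSet for 1.14.2 is scaled down immediately instead of being brought up to full size first
kubectl get rs -w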
Rolling back a Deployment
Roll back when the Deployment is not stable, for example when it is crash-looping. By default, all of the Deployment's rollout history is kept in the system so that you can roll back at any time.
history
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
3 <none>
5 kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
6 kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
7 kubectl set image deployment nginx-deployment nginx=nginx:1.15.2 --record=true
The CHANGE-CAUSE text is copied from the Deployment's kubernetes.io/change-cause annotation.
# Set the CHANGE-CAUSE message by annotating the Deployment
[root@master01 deployment]# kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="test annotate"
deployment.apps/nginx-deployment annotated
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
3 <none>
5 kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
6 kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
7 test annotate
To view the details of a specific revision:
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment --revision=7
deployment.apps/nginx-deployment with revision #7
Pod Template:
Labels: app=nginx
pod-template-hash=5787596d54
Annotations: kubernetes.io/change-cause: test annotate
Containers:
nginx:
Image: nginx:1.15.2
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Roll back to a previous revision
# Undo the current rollout and roll back to the previous revision
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
3 <none>
5 kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
7 test annotate
8 kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
# Use --to-revision to roll back to a specific revision
[root@master01 deployment]# kubectl rollout undo deployment nginx-deployment --to-revision=7
deployment.apps/nginx-deployment rolled back
[root@master01 deployment]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
3 <none>
5 kubectl set image deployment nginx-deployment nginx=nginx:1.16.1 --record=true
8 kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
9 test annotate
# Check whether the rollback succeeded
[root@master01 deployment]# kubectl get deployment nginx-deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 40m
Scaling a Deployment
[root@master01 deployment]# kubectl scale deployment nginx-deployment --replicas=4
deployment.apps/nginx-deployment scaled
[root@master01 deployment]# kubectl get deployment nginx-deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 4/4 4 4 41m
[root@master01 deployment]# kubectl scale deployment nginx-deployment --replicas=1
deployment.apps/nginx-deployment scaled
[root@master01 deployment]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 43m
Setting up an autoscaler
Assuming horizontal Pod autoscaling is enabled in the cluster:
kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
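The same autoscaler can be expressed as a manifest. A sketch, assuming the autoscaling/v2 API is available in the cluster:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80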
Proportional scaling
A RollingUpdate Deployment supports running multiple versions of an application at the same time. When the autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the additional replicas across the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.
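One way to observe proportional scaling (a sketch; the broken image tag is only used to keep the rollout stuck in progress):
# Start a rollout that will stall because the image cannot be pulled
kubectl set image deployment/nginx-deployment nginx=nginx:does-not-exist
# Scale while the rollout is still in progress
kubectl scale deployment/nginx-deployment --replicas=15
# The extra replicas are spread across the old and new ReplicaSets in proportion to their current sizes
kubectl get rs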
Pausing and resuming a Deployment rollout (a typical batch-update workflow is sketched after the two commands below)
- kubectl rollout pause deployment nginx-deployment
- kubectl rollout resume deployment nginx-deployment
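Pausing is useful when several changes should be applied without starting a separate rollout for each one; the changes only take effect once the Deployment is resumed. A sketch (the resource values are illustrative):
kubectl rollout pause deployment/nginx-deployment
# These changes are recorded but do not start a rollout while the Deployment is paused
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
kubectl set resources deployment/nginx-deployment -c nginx --limits=cpu=200m,memory=128Mi
# Resuming applies all of the queued changes in a single rollout
kubectl rollout resume deployment/nginx-deployment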
[root@master01 deployment]# kubectl get rs -w
NAME DESIRED CURRENT READY AGE
nginx-deployment-559d658b74 0 0 0 33m
nginx-deployment-5787596d54 5 5 5 29m
nginx-deployment-66b6c48dd5 0 0 0 47m
nginx-deployment-69cc985499 0 0 0 14m
[root@master01 ~]# kubectl rollout pause deployment nginx-deployment
deployment.apps/nginx-deployment paused
[root@master01 ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
3 <none>
8 kubectl set image deployment nginx-deployment nginx=nginx:1.15.1 --record=true
11 test annotate
12 test annotate
[root@master01 ~]# kubectl rollout resume deployment nginx-deployment
deployment.apps/nginx-deployment resumed
^C[root@master01 deployment]# kubectl get rs -w
NAME DESIRED CURRENT READY AGE
nginx-deployment-559d658b74 0 0 0 38m
nginx-deployment-5787596d54 5 5 5 34m
nginx-deployment-66b6c48dd5 0 0 0 51m
nginx-deployment-69cc985499 0 0 0 19m
nginx-deployment-559d658b74 0 0 0 38m
nginx-deployment-559d658b74 2 0 0 39m
nginx-deployment-5787596d54 4 5 5 35m
nginx-deployment-559d658b74 3 0 0 39m
nginx-deployment-5787596d54 4 5 5 35m
nginx-deployment-5787596d54 4 4 4 35m
nginx-deployment-559d658b74 3 0 0 39m
nginx-deployment-559d658b74 3 2 0 39m
nginx-deployment-559d658b74 3 3 0 39m
nginx-deployment-559d658b74 3 3 1 39m
nginx-deployment-5787596d54 3 4 4 35m
nginx-deployment-559d658b74 4 3 1 39m
nginx-deployment-5787596d54 3 4 4 35m
nginx-deployment-5787596d54 3 3 3 35m
nginx-deployment-559d658b74 4 3 1 39m
nginx-deployment-559d658b74 4 4 1 39m
nginx-deployment-559d658b74 4 4 2 39m
nginx-deployment-5787596d54 2 3 3 35m
nginx-deployment-559d658b74 5 4 2 39m
nginx-deployment-5787596d54 2 3 3 35m
nginx-deployment-559d658b74 5 4 2 39m
nginx-deployment-5787596d54 2 2 2 35m
nginx-deployment-559d658b74 5 5 2 39m
nginx-deployment-559d658b74 5 5 3 39m
nginx-deployment-5787596d54 1 2 2 35m
nginx-deployment-5787596d54 1 2 2 35m
nginx-deployment-5787596d54 1 1 1 35m
nginx-deployment-559d658b74 5 5 4 39m
nginx-deployment-5787596d54 0 1 1 36m
nginx-deployment-5787596d54 0 1 1 36m
nginx-deployment-5787596d54 0 0 0 36m
nginx-deployment-559d658b74 5 5 5 39m
Deployment status
A Deployment goes through several states during its lifecycle.
Progressing
Kubernetes marks a Deployment as progressing while one of the following tasks is being performed:
- the Deployment is creating a new ReplicaSet
- the Deployment is scaling up its newest ReplicaSet
- the Deployment is scaling down its older ReplicaSet(s)
- new Pods become ready or available (ready for at least MinReadySeconds)
Complete
- All replicas associated with the Deployment have been updated to the latest version specified, meaning any updates that were requested have completed.
- All replicas associated with the Deployment are available.
- No old replicas of the Deployment are running.
One way to check for this state is shown after this list.
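The Progressing condition in the Deployment status reports the reason NewReplicaSetAvailable once the rollout has finished (a jsonpath sketch):
kubectl get deployment nginx-deployment -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'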
Failed
Some possible causes of this state are:
- Insufficient quota
- Readiness probe failures
- Image pull errors
- Insufficient permissions
- Limit Ranges issues
- Application runtime misconfiguration
One way to detect this condition is to specify a deadline parameter in the Deployment spec: .spec.progressDeadlineSeconds.
If you pause a Deployment rollout, Kubernetes stops checking progress against the specified deadline. You can safely pause a Deployment in the middle of a rollout and resume it later without triggering the deadline condition.
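The deadline can be set with a patch; once it is exceeded, the Progressing condition turns False with reason ProgressDeadlineExceeded. A sketch (the 300-second value is illustrative):
# Report the rollout as failed if it makes no progress for 5 minutes
kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":300}}'
# Look for Progressing=False with reason ProgressDeadlineExceeded in the Conditions section
kubectl describe deployment nginx-deployment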
Operating on a failed Deployment
All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up or down, roll back to a previous revision, or pause it when you need to apply multiple changes to its Pod template.
Clean-up policy
Set the .spec.revisionHistoryLimit field on a Deployment to specify how many of its old ReplicaSets to keep; the rest are garbage-collected in the background. The default is 10.
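A minimal way to set it in the manifest (the value 3 is illustrative):
spec:
  revisionHistoryLimit: 3   # keep only the 3 most recent old ReplicaSets; older ones are garbage-collected
Setting this field to 0 removes all history, which makes rolling back impossible.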
Canary Deployment
Used to run several deployments of different versions or different configurations of the same component side by side.
A common approach is to use different image tags in the Pod templates and keep the old and new versions of the application running at the same time, so the new version can receive live production traffic before it is fully rolled out, as sketched below.
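A minimal canary sketch (names, labels, and image tags are illustrative): two Deployments share the app=nginx label that a Service selects and differ only in a track label and the image tag, so roughly a quarter of the traffic reaches the canary here.
# Stable track: 3 replicas of the current version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      track: stable
  template:
    metadata:
      labels:
        app: nginx
        track: stable
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
---
# Canary track: a single replica of the new version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
---
# The Service selects only app=nginx, so it load-balances across both tracks
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
Promoting the canary then means updating the stable Deployment's image and scaling the canary Deployment down to 0.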
A reference Deployment manifest
[root@master01 ~]# kubectl get deploy nginx-deployment -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "12"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx-deployment","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.15.2","name":"nginx","ports":[{"containerPort":80}]}]}}}}
kubernetes.io/change-cause: test annotate
creationTimestamp: "2022-11-01T13:55:08Z"
generation: 18
labels:
app: nginx
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:labels:
.: {}
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:containers:
k:{"name":"nginx"}:
.: {}
f:imagePullPolicy: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":80,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:protocol: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kubectl-client-side-apply
operation: Update
time: "2022-11-01T14:11:53Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubernetes.io/change-cause: {}
manager: kubectl
operation: Update
time: "2022-11-01T14:33:42Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:template:
f:spec:
f:containers:
k:{"name":"nginx"}:
f:image: {}
manager: kubectl-set
operation: Update
time: "2022-11-01T14:47:28Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
time: "2022-11-01T14:48:32Z"
name: nginx-deployment
namespace: default
resourceVersion: "76861"
uid: daec152b-f201-4e04-8cab-f70e5f1c2b13
spec:
progressDeadlineSeconds: 600
replicas: 5
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx
spec:
containers:
- image: nginx:1.16.1
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 5
conditions:
- lastTransitionTime: "2022-11-01T14:40:56Z"
lastUpdateTime: "2022-11-01T14:40:56Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-11-01T14:48:29Z"
lastUpdateTime: "2022-11-01T14:48:32Z"
message: ReplicaSet "nginx-deployment-559d658b74" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 18
readyReplicas: 5
replicas: 5
updatedReplicas: 5