
k8s-deployment: application lifecycle management workflow

Published: 2024-07-22 15:20:21

Tags: kubectl, lifecycle, Running, web1, 778dcf88b, deployment, k8s

deployment: application lifecycle management workflow
Application -> Deploy -> Upgrade -> Rollback -> Delete

1 Deploy

deployment

kubectl apply -f web1-deploy.yaml
kubectl create deployment web --image=nginx:1.16 --replicas=3

# web1-deploy.yaml
# Note: the web1 namespace must already exist (kubectl create namespace web1)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web1
  namespace: web1
  labels:
    app: web1-app
    project: web1-pj
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web1-app
  template:
    metadata:
      labels:
        app: web1-app
    spec:
      containers:
      - image: nginx:1.16
        name: nginx

[root@k8s-master ~]# vi web1-deploy.yaml
[root@k8s-master ~]# kubectl apply -f web1-deploy.yaml
deployment.apps/web1 created
[root@k8s-master ~]# kubectl get pods -n web1
NAME                   READY   STATUS    RESTARTS   AGE
web1-546f799d9-gjdll   1/1     Running   0          34s
web1-546f799d9-h8s9k   1/1     Running   0          34s
web1-546f799d9-rmlzk   1/1     Running   0          34s
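
The Pod names in the listing above are not random: a Deployment's Pods are named `<deployment>-<pod-template-hash>-<random suffix>`. A small sketch (hypothetical helpers, not part of kubectl) showing how the owning ReplicaSet and Deployment can be read off a Pod name:

```python
def replicaset_of(pod_name: str) -> str:
    """Strip the random suffix: <deployment>-<pod-template-hash> is the
    name of the ReplicaSet that owns the Pod."""
    return pod_name.rsplit("-", 1)[0]

def deployment_of(pod_name: str) -> str:
    """Strip both the suffix and the pod-template-hash segment."""
    return pod_name.rsplit("-", 2)[0]

print(replicaset_of("web1-546f799d9-gjdll"))  # web1-546f799d9
print(deployment_of("web1-546f799d9-gjdll"))  # web1
```

This is why, during the rolling update later in this article, you can tell old Pods (`web1-546f799d9-*`) from new ones (`web1-778dcf88b-*`) at a glance: the middle segment changes whenever the Pod template changes.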

service

[root@k8s-master ~]# vi web1-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web1
  namespace: web1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web1-app
  type: NodePort

[root@k8s-master ~]# kubectl apply -f web1-service.yaml
service/web1 created
[root@k8s-master ~]# kubectl get pods,svc -n web1
NAME                       READY   STATUS    RESTARTS   AGE
pod/web1-546f799d9-gjdll   1/1     Running   0          19m
pod/web1-546f799d9-h8s9k   1/1     Running   0          19m
pod/web1-546f799d9-rmlzk   1/1     Running   0          19m

NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/web1   NodePort   10.97.204.112   <none>        80:31065/TCP   13m

[root@k8s-master ~]# kubectl logs web1-546f799d9-gjdll -n web1
[root@k8s-master ~]# kubectl logs web1-546f799d9-h8s9k -n web1
[root@k8s-master ~]# kubectl logs web1-546f799d9-rmlzk -n web1
10.5.0.5 - - [20/Feb/2022:08:01:57 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36" "-"
2022/02/20 08:01:57 [error] 7#7: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.5.0.5, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "10.5.0.5:31065", referrer: "http://10.5.0.5:31065/"
10.5.0.5 - - [20/Feb/2022:08:01:57 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://10.5.0.5:31065/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36" "-"
10.5.0.5 - - [20/Feb/2022:08:02:13 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36" "-"

2 Application Upgrade
Three ways to update the image:
kubectl apply -f xxx.yaml
kubectl set image deployment/web nginx=nginx:1.17
kubectl edit deployment/web   # opens the system editor

Version before the upgrade: 1.16

Rolling update
[root@k8s-master ~]# vi web1-deploy.yaml

    spec:
      containers:
      - image: nginx:1.17

[root@k8s-master ~]# kubectl apply -f web1-deploy.yaml
deployment.apps/web1 configured
[root@k8s-master ~]# kubectl get pods,svc -n web1
NAME                       READY   STATUS              RESTARTS   AGE
pod/web1-546f799d9-gjdll   1/1     Running             0          83m
pod/web1-546f799d9-h8s9k   1/1     Running             0          83m
pod/web1-778dcf88b-7lgjm   1/1     Running             0          25s
pod/web1-778dcf88b-tj86f   0/1     ContainerCreating   0          11s

NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/web1   NodePort   10.97.204.112   <none>        80:31065/TCP   77m
[root@k8s-master ~]# kubectl get pods,svc -n web1
NAME                       READY   STATUS        RESTARTS   AGE
pod/web1-546f799d9-gjdll   0/1     Terminating   0          83m
pod/web1-546f799d9-h8s9k   0/1     Terminating   0          83m
pod/web1-778dcf88b-7lgjm   1/1     Running       0          36s
pod/web1-778dcf88b-c7fw9   1/1     Running       0          7s
pod/web1-778dcf88b-tj86f   1/1     Running       0          22s

NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/web1   NodePort   10.97.204.112   <none>        80:31065/TCP   77m
[root@k8s-master ~]# kubectl get pods,svc -n web1
NAME                       READY   STATUS    RESTARTS   AGE
pod/web1-778dcf88b-7lgjm   1/1     Running   0          57s
pod/web1-778dcf88b-c7fw9   1/1     Running   0          28s
pod/web1-778dcf88b-tj86f   1/1     Running   0          43s

NAME           TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/web1   NodePort   10.97.204.112   <none>        80:31065/TCP   77m

[root@k8s-master ~]# kubectl set image deployment web1 nginx=nginx:1.18 -n web1
deployment.apps/web1 image updated

kubectl edit deployment web1 -n web1

Rolling update process

[root@k8s-master ~]# kubectl get replicaset -n web1
NAME             DESIRED   CURRENT   READY   AGE
web1-546f799d9   0         0         0       100m    -> nginx:1.16
web1-778dcf88b   0         0         0       17m     -> nginx:1.17
web1-69f84f567   3         3         3       8m20s   -> nginx:1.18

deployment -> replicaset

[root@k8s-master ~]# kubectl describe deploy web1 -n web1
Name:                   web1
Namespace:              web1
CreationTimestamp:      Sun, 20 Feb 2022 07:51:11 +0000
Labels:                 app=web1-app
                        project=web1-pj
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=web1-app
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=web1-app
  Containers:
   nginx:
    Image:        nginx:1.18
OldReplicaSets:  <none>
NewReplicaSet:   web1-69f84f567 (3/3 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  22m                deployment-controller  Scaled up replica set web1-778dcf88b to 1
  Normal  ScalingReplicaSet  21m                deployment-controller  Scaled down replica set web1-546f799d9 to 2
  Normal  ScalingReplicaSet  21m                deployment-controller  Scaled up replica set web1-778dcf88b to 2
  Normal  ScalingReplicaSet  21m                deployment-controller  Scaled down replica set web1-546f799d9 to 1
  Normal  ScalingReplicaSet  21m                deployment-controller  Scaled up replica set web1-778dcf88b to 3
  Normal  ScalingReplicaSet  21m                deployment-controller  Scaled down replica set web1-546f799d9 to 0
  Normal  ScalingReplicaSet  13m                deployment-controller  Scaled up replica set web1-69f84f567 to 1
  Normal  ScalingReplicaSet  13m                deployment-controller  Scaled down replica set web1-778dcf88b to 2
  Normal  ScalingReplicaSet  13m                deployment-controller  Scaled up replica set web1-69f84f567 to 2
  Normal  ScalingReplicaSet  12m (x3 over 12m)  deployment-controller  (combined from similar events): Scaled down replica set web1-778dcf88b to 0
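
The `RollingUpdateStrategy: 25% max unavailable, 25% max surge` line explains the event sequence: the controller may create up to 25% extra Pods (rounded up) and tolerate up to 25% unavailable Pods (rounded down). A small sketch of that rounding, mirroring how the Deployment controller resolves percentage values (a simplification, not the real controller code):

```python
import math

def rolling_update_bounds(replicas: int, max_surge_pct: int, max_unavail_pct: int):
    """maxSurge is rounded UP, maxUnavailable is rounded DOWN, so with a
    small replica count an update still makes progress without dropping
    below the desired number of ready Pods."""
    max_surge = math.ceil(replicas * max_surge_pct / 100)
    max_unavailable = math.floor(replicas * max_unavail_pct / 100)
    return max_surge, max_unavailable

print(rolling_update_bounds(3, 25, 25))  # (1, 0)
```

With 3 replicas this yields surge 1 and unavailable 0: at most 4 Pods exist at once, and the old ReplicaSet is only scaled down after a new Pod is ready, exactly the "up to 1 -> down to 2 -> up to 2 -> down to 1 ..." pattern in the events above.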

3 Horizontal Scaling (run multiple replicas to improve concurrency)
Edit the replicas value in the YAML and apply it again, or:
kubectl scale deployment web1 --replicas=10
*The replicas field controls the number of Pod replicas
[root@k8s-master ~]# vi web1-deploy.yaml
spec:
  replicas: 10
[root@k8s-master ~]# kubectl apply -f web1-deploy.yaml
deployment.apps/web1 configured
[root@k8s-master ~]# kubectl get pods -n web1
NAME                   READY   STATUS    RESTARTS   AGE
web1-778dcf88b-2lqvr   1/1     Running   0          22s
web1-778dcf88b-2mglr   1/1     Running   0          21s
web1-778dcf88b-4fgqq   1/1     Running   0          26s
web1-778dcf88b-5d69b   1/1     Running   0          21s
web1-778dcf88b-6jqzk   1/1     Running   0          26s
web1-778dcf88b-7gnnv   1/1     Running   0          23s
web1-778dcf88b-f26ld   1/1     Running   0          26s
web1-778dcf88b-j5mhr   1/1     Running   0          24s
web1-778dcf88b-m78lc   1/1     Running   0          26s
web1-778dcf88b-nzrmc   1/1     Running   0          26s
[root@k8s-master ~]# kubectl scale deployment web1 --replicas=5 -n web1
deployment.apps/web1 scaled
[root@k8s-master ~]# kubectl get pods -n web1
NAME                   READY   STATUS        RESTARTS   AGE
web1-778dcf88b-2lqvr   0/1     Terminating   0          2m36s
web1-778dcf88b-2mglr   0/1     Terminating   0          2m35s
web1-778dcf88b-4fgqq   1/1     Running       0          2m40s
web1-778dcf88b-5d69b   0/1     Terminating   0          2m35s
web1-778dcf88b-6jqzk   1/1     Running       0          2m40s
web1-778dcf88b-7gnnv   0/1     Terminating   0          2m37s
web1-778dcf88b-f26ld   1/1     Running       0          2m40s
web1-778dcf88b-j5mhr   0/1     Terminating   0          2m38s
web1-778dcf88b-m78lc   1/1     Running       0          2m40s
web1-778dcf88b-nzrmc   1/1     Running       0          2m40s
[root@k8s-master ~]# kubectl get pods -n web1
NAME                   READY   STATUS    RESTARTS   AGE
web1-778dcf88b-4fgqq   1/1     Running   0          2m51s
web1-778dcf88b-6jqzk   1/1     Running   0          2m51s
web1-778dcf88b-f26ld   1/1     Running   0          2m51s
web1-778dcf88b-m78lc   1/1     Running   0          2m51s
web1-778dcf88b-nzrmc   1/1     Running   0          2m51s
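
`kubectl scale` only changes `spec.replicas`; the controller then reconciles the actual Pod count toward the desired one, creating or deleting Pods as needed. A toy reconcile step (a deliberately simplified sketch, not the real controller, which also ranks deletion victims by readiness and age):

```python
import itertools

def reconcile(pods: list[str], desired: int, prefix: str = "web1-778dcf88b") -> list[str]:
    """Create or delete Pods until the count matches spec.replicas."""
    pods = list(pods)
    counter = itertools.count()
    while len(pods) < desired:               # scale out: add new Pods
        pods.append(f"{prefix}-new{next(counter)}")
    while len(pods) > desired:               # scale in: remove surplus Pods
        pods.pop()
    return pods

print(len(reconcile(["a", "b", "c"], 10)))  # 10
print(len(reconcile(["a"] * 10, 5)))        # 5
```

This desired-state loop is also why scaling never creates a new ReplicaSet or a new rollout revision: the Pod template is unchanged, only the count differs.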

4 Rollback (restore a working version after a failed upgrade)
kubectl rollout history deployment/web              # list rollout history
kubectl rollout undo deployment/web                 # roll back to the previous revision
kubectl rollout undo deployment/web --to-revision=2 # roll back to a specific revision

[root@k8s-master ~]# kubectl get rs -n web1
NAME             DESIRED   CURRENT   READY   AGE
web1-546f799d9   0         0         0       135m
web1-69f84f567   0         0         0       44m
web1-778dcf88b   5         5         5       52m

[root@k8s-master ~]# kubectl rollout history deployment web1 -n web1
deployment.apps/web1
REVISION  CHANGE-CAUSE
1         <none>
3         <none>
4         <none>

[root@k8s-master ~]# kubectl rollout undo deployment web1 -n web1
deployment.apps/web1 rolled back
[root@k8s-master ~]# kubectl get pods -n web1
NAME                   READY   STATUS        RESTARTS   AGE
web1-69f84f567-2pflg   1/1     Running       0          9s
web1-69f84f567-5kcvt   1/1     Running       0          9s
web1-69f84f567-lv8t4   1/1     Running       0          11s
web1-69f84f567-xsjlm   1/1     Running       0          11s
web1-69f84f567-zqm6m   1/1     Running       0          11s
web1-778dcf88b-6jqzk   0/1     Terminating   0          15m
web1-778dcf88b-m78lc   0/1     Terminating   0          15m
web1-778dcf88b-nzrmc   0/1     Terminating   0          15m
[root@k8s-master ~]# kubectl get pods -n web1
NAME                   READY   STATUS    RESTARTS   AGE
web1-69f84f567-2pflg   1/1     Running   0          15s
web1-69f84f567-5kcvt   1/1     Running   0          15s
web1-69f84f567-lv8t4   1/1     Running   0          17s
web1-69f84f567-xsjlm   1/1     Running   0          17s
web1-69f84f567-zqm6m   1/1     Running   0          17s

[root@k8s-master ~]# kubectl rollout history deployment web1 -n web1
deployment.apps/web1
REVISION  CHANGE-CAUSE
1         <none>
4         <none>
5         <none>

[root@k8s-master ~]# kubectl get rs -n web1
NAME             DESIRED   CURRENT   READY   AGE
web1-546f799d9   0         0         0       140m
web1-69f84f567   5         5         5       48m
web1-778dcf88b   0         0         0       57m

5 Delete
kubectl delete deployment/web
kubectl delete svc/web

*Deleting a Pod with kubectl delete pod has no lasting effect: the Pod is managed by a controller, which immediately starts a new replacement Pod.
[root@k8s-master ~]# kubectl delete pod web1-69f84f567-2pflg -n web1
pod "web1-69f84f567-2pflg" deleted

[root@k8s-master ~]# kubectl get pods -n web1
NAME                   READY   STATUS    RESTARTS   AGE
web1-69f84f567-5kcvt   1/1     Running   0          19m
web1-69f84f567-bp6sr   1/1     Running   0          16s
web1-69f84f567-lv8t4   1/1     Running   0          19m
web1-69f84f567-xsjlm   1/1     Running   0          19m
web1-69f84f567-zqm6m   1/1     Running   0          19m

How rolling updates and rollbacks are implemented: the Deployment keeps one ReplicaSet per revision, scales the new one up and the old one down during an upgrade, and a rollback simply scales an old ReplicaSet back up.

From: https://www.cnblogs.com/z20240722/p/18316061
