
k8s Cluster Upgrade

Date: 2024-06-06 18:45:13

Tags: staticpods, upgrade, kubernetes, cluster, kubeadm, k8s, kube, 1.24

1. Upgrade Overview

  • Kubernetes versions are written as x.y.z, where x is the major version, y the minor version, and z the patch version. Minor versions cannot be skipped (e.g. 1.28.0 -> 1.30.0 is not allowed), while patch versions can be jumped (e.g. 1.28.0 -> 1.28.10 is fine).
  • Use kubelet and kubeadm versions that match the target release; ideally keep all component versions consistent.
  • After the upgrade, all containers are restarted, because their container spec hashes have changed.
  • During the upgrade, each node must be drained and its workloads migrated elsewhere.
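
The minor-version rule above can be sketched as a small shell check (the helper names here are hypothetical, for illustration only):

```shell
#!/bin/sh
# Hypothetical helper illustrating the skew rule: the minor version may move
# forward by at most one step; patch versions within a minor may jump freely.
minor() { printf '%s\n' "$1" | cut -d. -f2; }

can_upgrade() {
  step=$(( $(minor "$2") - $(minor "$1") ))
  [ "$step" -ge 0 ] && [ "$step" -le 1 ]
}

can_upgrade 1.23.17 1.24.15 && echo "1.23.17 -> 1.24.15: allowed"
can_upgrade 1.28.0  1.30.0  || echo "1.28.0 -> 1.30.0: blocked (skips 1.29)"
can_upgrade 1.28.0  1.28.10 && echo "1.28.0 -> 1.28.10: allowed (patch jump)"
```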

2. Upgrade Procedure

2.1 Upgrade the first control-plane node

Upgrade control-plane nodes one at a time. First pick the control-plane node to upgrade; it must have the /etc/kubernetes/admin.conf file. This walkthrough upgrades from 1.23.17 to 1.24.15; upgrades between other versions follow the same pattern.
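
A quick way to confirm the node you picked is eligible is to check for that file before starting. A minimal sketch; the function name is hypothetical and the default path is the one mentioned above:

```shell
#!/bin/sh
# Hypothetical pre-flight check: the chosen control-plane node must have the
# kubeadm admin kubeconfig before 'kubeadm upgrade' commands are run on it.
check_admin_conf() {
  conf="${1:-/etc/kubernetes/admin.conf}"
  if [ -f "$conf" ]; then
    echo "ok: $conf present, this node can drive the upgrade"
  else
    echo "missing: $conf, pick another control-plane node" >&2
    return 1
  fi
}
```

Run `check_admin_conf` with no arguments on the candidate node; a non-zero exit means pick a different node.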

2.1.1 Upgrade kubeadm

# Check the current version information
$ kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
k8s-master01   Ready    control-plane,master   148d   v1.23.17
k8s-node01     Ready    <none>                 148d   v1.23.17
k8s-node02     Ready    <none>                 148d   v1.23.17
k8s-node03     Ready    <none>                 148d   v1.23.17

# List the versions available to upgrade to
$ yum list --show-duplicates kubeadm |grep '1.24.'
kubeadm.x86_64                       1.24.0-0                        kubernetes 
kubeadm.x86_64                       1.24.1-0                        kubernetes 
kubeadm.x86_64                       1.24.2-0                        kubernetes 
kubeadm.x86_64                       1.24.3-0                        kubernetes 
kubeadm.x86_64                       1.24.4-0                        kubernetes 
kubeadm.x86_64                       1.24.5-0                        kubernetes 
kubeadm.x86_64                       1.24.6-0                        kubernetes 
kubeadm.x86_64                       1.24.7-0                        kubernetes 
kubeadm.x86_64                       1.24.8-0                        kubernetes 
kubeadm.x86_64                       1.24.9-0                        kubernetes 
kubeadm.x86_64                       1.24.10-0                       kubernetes 
kubeadm.x86_64                       1.24.11-0                       kubernetes 
kubeadm.x86_64                       1.24.12-0                       kubernetes 
kubeadm.x86_64                       1.24.13-0                       kubernetes 
kubeadm.x86_64                       1.24.14-0                       kubernetes 
kubeadm.x86_64                       1.24.15-0                       kubernetes 
kubeadm.x86_64                       1.24.16-0                       kubernetes 
kubeadm.x86_64                       1.24.17-0                       kubernetes

$ yum -y install kubeadm-1.24.15
$ kubeadm version -o yaml
clientVersion:
  buildDate: "2023-06-14T09:54:33Z"
  compiler: gc
  gitCommit: 2c67202dc0bb96a7a837cbfb8d72e1f34dfc2808
  gitTreeState: clean
  gitVersion: v1.24.15
  goVersion: go1.19.10
  major: "1"
  minor: "24"
  platform: linux/amd64

2.1.2 Verify the upgrade plan; there must be no errors

$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.24.15
W0606 18:11:22.084648  122592 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get "https://cdn.dl.k8s.io/release/stable.txt": dial tcp 146.75.113.55:443: i/o timeout (Client.Timeout exceeded while awaiting headers)
W0606 18:11:22.084821  122592 version.go:105] falling back to the local client version: v1.24.15
[upgrade/versions] Target version: v1.24.15
[upgrade/versions] Latest version in the v1.23 series: v1.23.17

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     4 x v1.23.17   v1.24.15

Upgrade to the latest stable version:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.23.17   v1.24.15
kube-controller-manager   v1.23.17   v1.24.15
kube-scheduler            v1.23.17   v1.24.15
kube-proxy                v1.23.17   v1.24.15
CoreDNS                   v1.8.6     v1.8.6
etcd                      3.5.6-0    3.5.6-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.24.15

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

2.1.3 Run the upgrade command to upgrade the control-plane components

$ kubeadm upgrade apply v1.24.15
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.24.15"
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.24.15
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y 
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.24.15" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2379417340"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-06-06-18-17-42/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-06-06-18-17-42/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-06-06-18-17-42/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the deprecated label node-role.kubernetes.io/master='' from all control plane Nodes. After this step only the label node-role.kubernetes.io/control-plane='' will be present on control plane Nodes.
[upgrade/postupgrade] Adding the new taint &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} to all control plane Nodes. After this step both taints &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} and &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} should be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.24.15". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

2.1.4 Drain the node: mark it unschedulable and evict all workloads, preparing it for maintenance

$ kubectl drain k8s-master01 --ignore-daemonsets
node/k8s-master01 cordoned
WARNING: ignoring DaemonSet-managed Pods: default/ds-ng-d6g58, kube-flannel/kube-flannel-ds-qmpmf, kube-system/kube-proxy-tc92t
evicting pod kube-system/coredns-65c54cc984-sjd7x
pod/coredns-65c54cc984-sjd7x evicted
node/k8s-master01 drained

2.1.5 Upgrade kubelet and kubectl

$ yum -y install kubelet-1.24.15 kubectl-1.24.15
$ systemctl daemon-reload
$ systemctl restart kubelet

2.1.6 Uncordon the node: mark it schedulable again so it comes back online

$ kubectl uncordon k8s-master01
node/k8s-master01 uncordoned
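
After uncordoning, it's worth confirming the node now reports the target version. Below is a small sketch that parses the VERSION column out of `kubectl get nodes` output; it runs against a captured sample line so the logic works offline, and the function name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: pull the VERSION column (last field) for a given node
# out of 'kubectl get nodes' output. In a real cluster you would pipe
# 'kubectl get nodes' into it; a sample line from this upgrade is used here.
node_version() { awk -v n="$1" '$1 == n { print $NF }'; }

sample='k8s-master01   Ready    control-plane   148d   v1.24.15'
printf '%s\n' "$sample" | node_version k8s-master01   # prints v1.24.15
```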

2.2 Upgrade the other control-plane nodes

For the remaining control-plane nodes, two steps differ:

  • There is no need to run kubeadm upgrade plan.
  • Run kubeadm upgrade node instead of kubeadm upgrade apply.

All other steps are the same.
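
The post stops at the control plane, but the `kubelet  4 x v1.23.17` row in the plan output shows the worker kubelets still need upgrading. Below is a hedged sketch of the usual per-worker sequence; the function only prints the commands so they can be reviewed before running, and the node name and version arguments are examples from this cluster:

```shell
#!/bin/sh
# Print (not execute) the usual per-worker upgrade sequence for review.
# The commands follow the same pattern as the control-plane steps above.
worker_upgrade_plan() {
  node="$1"; ver="$2"
  cat <<EOF
kubectl drain $node --ignore-daemonsets        # from a control-plane node
yum -y install kubeadm-$ver                    # on $node
kubeadm upgrade node                           # on $node: refreshes the local kubelet config
yum -y install kubelet-$ver kubectl-$ver       # on $node
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon $node                         # from a control-plane node
EOF
}

worker_upgrade_plan k8s-node01 1.24.15
```

Repeat for each worker, one node at a time, so the cluster keeps enough capacity for the evicted workloads.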

From: https://www.cnblogs.com/f66666/p/18235749
