
Renewing Kubernetes Certificates

Posted: 2024-04-17

Environment

OS: CentOS 7.9.2009
Cluster: three nodes, one master and two workers, Kubernetes v1.21.5, installed with KubeSphere (so effectively a kubeadm-deployed cluster)
IPs: 192.168.106.130, 192.168.106.131, 192.168.106.132
Cluster state: the certificates on all 3 nodes have expired and the whole cluster is down
I set this cluster up in a VM back in 2022; it is now March 29, 2024

Error messages

[root@master ~]# kubectl get nodes
The connection to the server 192.168.123.130:6443 was refused - did you specify the right host or port?

Check the system log: less /var/log/messages

Mar 29 11:27:25 master systemd: Started Kubernetes systemd probe.
Mar 29 11:27:25 master kubelet: I0329 11:27:25.394468 5881 server.go:440] "Kubelet version" kubeletVersion="v1.21.5"
Mar 29 11:27:25 master kubelet: I0329 11:27:25.394653 5881 server.go:851] "Client rotation is on, will bootstrap in background"
Mar 29 11:27:25 master kubelet: E0329 11:27:25.395672 5881 bootstrap.go:265] part of the existing bootstrap client certificate in /etc/kubernetes/kubelet.conf is expired: 2023-08-28 17:29:43 +0000 UTC
Mar 29 11:27:25 master kubelet: E0329 11:27:25.395709 5881 server.go:292] "Failed to run kubelet" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
Mar 29 11:27:25 master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Mar 29 11:27:25 master systemd: Unit kubelet.service entered failed state.
Mar 29 11:27:25 master systemd: kubelet.service failed.

The certificates expired on 2023-08-28.
Confirm the expiry dates with: kubeadm certs check-expiration
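Independently of kubeadm, you can also read the expiry date straight off a certificate file with openssl. A minimal sketch; the certificate path assumes the default kubeadm layout, and the days_until helper is my own, not a standard tool:

```shell
# days_until: whole days from now until the given date string (GNU date).
days_until() {
  local target
  target=$(date -d "$1" +%s)
  echo $(( (target - $(date +%s)) / 86400 ))
}

# Read the apiserver certificate's notAfter field and report days remaining.
# Path is the default kubeadm location; adjust if your layout differs.
cert=/etc/kubernetes/pki/apiserver.crt
if [ -f "$cert" ]; then
  expiry=$(openssl x509 -enddate -noout -in "$cert" | cut -d= -f2)
  echo "apiserver cert expires in $(days_until "$expiry") days"
fi
```

A negative number means the certificate is already expired, which is exactly the situation above.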

Renewing the certificates

  1. First renew the certificates so kubelet can start:
    kubeadm certs renew all
    The output looks like this:
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration
 
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
 
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
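One way to perform that restart on a kubeadm cluster, sketched below, is to move the static-pod manifests out of the directory kubelet watches and then move them back: kubelet stops a static pod when its manifest disappears and recreates it when the file returns. The helper name and the default 20-second pause are my own choices, not something the post shows:

```shell
# restart_static_pods: bounce all static control-plane pods by moving their
# manifests away and back. Assumes at least one *.yaml manifest is present.
# $1: manifests dir (kubeadm default shown), $2: seconds to wait in between.
restart_static_pods() {
  local m=${1:-/etc/kubernetes/manifests} wait=${2:-20} stash
  stash=$(mktemp -d)
  mv "$m"/*.yaml "$stash"/
  sleep "$wait"          # give kubelet time to tear the old pods down
  mv "$stash"/*.yaml "$m"/
  rmdir "$stash"
}
```

Running restart_static_pods on the master restarts kube-apiserver, kube-controller-manager, kube-scheduler and etcd in one go, which is what the renew output asks for.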
  2. The output's last line says to restart kube-apiserver, kube-controller-manager, kube-scheduler and etcd. Before doing that, check whether the certificates were actually renewed: kubeadm certs check-expiration
[root@master ~]# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0329 17:20:26.792993   60459 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 29, 2025 08:39 UTC   364d                                    no
apiserver                  Mar 29, 2025 08:37 UTC   364d            ca                      no
apiserver-kubelet-client   Mar 29, 2025 08:37 UTC   364d            ca                      no
controller-manager.conf    Mar 29, 2025 08:39 UTC   364d                                    no
front-proxy-client         Mar 29, 2025 08:37 UTC   364d            front-proxy-ca          no
scheduler.conf             Mar 29, 2025 08:39 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Aug 25, 2032 17:29 UTC   8y              no
front-proxy-ca          Aug 25, 2032 17:29 UTC   8y              no
  3. The certificates have been renewed, but the kubeconfig files used by kubelet and the control-plane components still embed the old certificates, so kubelet still cannot start. Delete those files and regenerate them with kubeadm:
    rm -rf /etc/kubernetes/*.conf
    kubeadm init phase kubeconfig all
root@master:~# rm -rf /etc/kubernetes/*.conf
root@master:~# kubeadm  init phase kubeconfig all
I1212 23:35:49.775848   19629 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.22
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
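Before the rm -rf above, it is worth copying the old kubeconfigs aside so they can be diffed or restored if kubeadm picks up unexpected settings. A small sketch; the helper name and backup path are my own convention:

```shell
# backup_kubeconfigs: copy all *.conf files from one dir to another before
# regenerating them. $1: source dir, $2: backup dir (created if missing).
backup_kubeconfigs() {
  local src=$1 dst=$2
  mkdir -p "$dst"
  cp "$src"/*.conf "$dst"/
}

# Typical use before the destructive step:
#   backup_kubeconfigs /etc/kubernetes /root/kubeconf-backup-$(date +%F)
#   rm -f /etc/kubernetes/*.conf && kubeadm init phase kubeconfig all
```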
  4. Now kubelet can be restarted: systemctl restart kubelet
[root@master ~]# systemctl restart kubelet
[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2024-03-29 17:24:56 CST; 4s ago
     Docs: http://kubernetes.io/docs/
 Main PID: 884 (kubelet)
    Tasks: 13
   Memory: 34.8M
   CGroup: /system.slice/kubelet.service
           └─884 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --conf...
  5. Update the admin kubeconfig: copy the newly generated admin.conf over the config file under ~/.kube:
    cp /etc/kubernetes/admin.conf ~/.kube/config
  6. Check the node status: kubectl get nodes
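Steps 5 and 6 can be wrapped in a small helper that keeps the previous kubeconfig around before overwriting it. The function name and timestamped backup suffix are my own, not from the post:

```shell
# install_admin_conf: replace a kubeconfig, keeping a timestamped backup of
# the old one. $1: freshly generated admin.conf, $2: destination kubeconfig.
install_admin_conf() {
  if [ -f "$2" ]; then cp "$2" "$2.bak-$(date +%s)"; fi
  mkdir -p "$(dirname "$2")"
  cp "$1" "$2"
}

# install_admin_conf /etc/kubernetes/admin.conf ~/.kube/config
# kubectl get nodes
```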

The master node is now recovered; next come the worker nodes.

---

Worker nodes

Approach: the cluster was built with kubeadm and etcd runs as a static pod on the master node, so once the master is back and etcd is confirmed healthy, the worker nodes can simply rejoin the cluster.

  1. Delete the worker nodes: kubectl delete nodes <node-name>
root@master:~# kubectl delete nodes worker1 
node "worker1" deleted
root@master:~# kubectl delete nodes worker2
node "worker2" deleted
  2. Generate the join command: kubeadm token create --print-join-command
root@master:~# kubeadm token create --print-join-command
kubeadm join 192.168.106.130:6443 --token 6vzr7y.mtrs8arvtt6xo6lg --discovery-token-ca-cert-hash sha256:3c816d1b3c2c8a54087876a31e2936b6b5cc247c0328feb12098e939cfea7467
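If the printed join command ever gets lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA. This openssl pipeline is the standard recipe for RSA CA keys, not something shown in the post:

```shell
# Recompute the discovery hash from the CA certificate: extract the public
# key, DER-encode it, and take its SHA-256 digest. Default kubeadm CA path.
ca=/etc/kubernetes/pki/ca.crt
if [ -f "$ca" ]; then
  openssl x509 -pubkey -in "$ca" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
fi
```

The printed 64-character hex string is what goes after sha256: in the join command.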
  3. Reset each worker node with kubeadm reset -f (run on the 131 and 132 nodes)
[root@worker1 ~]# kubeadm reset -f
[preflight] Running pre-flight checks
W0329 16:49:24.099865   23892 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
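The reset output above warns that CNI configuration, packet-filter rules, and kubeconfig files are not cleaned up. A small sketch of that manual cleanup; the helper name is my own, and the iptables/ipvsadm flush is left as a comment because it is disruptive and should be run deliberately:

```shell
# post_reset_cleanup: remove the leftovers `kubeadm reset` itself warns about.
# $1: CNI config dir, $2: kubeconfig file.
post_reset_cleanup() {
  rm -rf "$1"
  rm -f "$2"
}

# post_reset_cleanup /etc/cni/net.d "$HOME/.kube/config"
# Packet rules still need flushing by hand, as the output says:
#   iptables -F && iptables -t nat -F && iptables -X
#   ipvsadm --clear    # only if the cluster used IPVS
```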
  4. Rejoin the cluster: kubeadm join 192.168.106.130:6443 --token 6vzr7y.mtrs8arvtt6xo6lg --discovery-token-ca-cert-hash sha256:3c816d1b3c2c8a54087876a31e2936b6b5cc247c0328feb12098e939cfea7467
[root@worker1 ~]# kubeadm join 192.168.106.130:6443 --token 6vzr7y.mtrs8arvtt6xo6lg --discovery-token-ca-cert-hash sha256:3c816d1b3c2c8a54087876a31e2936b6b5cc247c0328feb12098e939cfea7467
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0329 16:51:16.036841   23959 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  5. Back on the master, check the node status; everything is Ready again: kubectl get nodes
[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   578d    v1.21.5
worker1   Ready    <none>                 3m27s   v1.21.5
worker2   Ready    <none>                 3m21s   v1.21.5
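That final check can also be scripted, e.g. for a cron job that alerts when a node drops out. The count_ready helper below is my own, not from the post:

```shell
# count_ready: count rows whose STATUS column is exactly "Ready" in
# `kubectl get nodes` output piped on stdin (header row is skipped).
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# For this three-node cluster:
#   [ "$(kubectl get nodes | count_ready)" -eq 3 ] && echo "all nodes Ready"
```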

From: https://www.cnblogs.com/wszzn/p/18141114
