
Fixing Expired Certificates in a kubeadm-Deployed Kubernetes Cluster

The Kubernetes CA certificates are valid for 10 years, but the component certificates are only valid for 1 year, so they need to be renewed to stay usable. There are currently three mainstream approaches:

1. Upgrade the cluster version: every upgrade renews the certificates for another year; the official intent behind the 1-year validity is to get users to upgrade at least once a year.
2. Renew via kubeadm commands (this only extends them by one year).
3. Compile kubeadm from source so the certificate validity can be customized.

This lab environment is a single-master cluster; in a multi-master cluster, the certificates renewed on one master must also be distributed to the other master nodes!

This document uses Kubernetes v1.18.3; it is not guaranteed to apply to other versions, so test it yourself first.

1. Check certificate expiration

1.1 Method 1

  $ kubeadm alpha certs check-expiration

For some reason, on my self-built cluster the command above does not show the CA certificates' validity, so method 2 is recorded as well!

1.2 Method 2

  $ for item in `find /etc/kubernetes/pki -maxdepth 2 -name "*.crt"`;
  do openssl x509 -in $item -text -noout| grep Not;
  echo ======================$item===============;
  done

You can also check them one at a time:

  $ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
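
If you want the remaining validity in days rather than raw dates, a small shell sketch like the following also works (assuming GNU date and the default /etc/kubernetes/pki layout):

  $ for crt in $(find /etc/kubernetes/pki -maxdepth 2 -name "*.crt"); do
      end=$(openssl x509 -in "$crt" -noout -enddate | cut -d= -f2);
      days=$(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ));
      printf '%-55s %s days left\n' "$crt" "$days";
    done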

2. Renew certificates via kubeadm commands

2.1 Change the time on every machine in the cluster to simulate certificates on the verge of expiring

  $ date -s "2022-3-1 12:00"
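
Note that the clock change is only for reproducing the problem; remember to sync the time back on every node afterwards, for example (a sketch assuming chrony or ntpdate is installed):

  $ chronyc makestep          # if chronyd is running
  $ ntpdate ntp.aliyun.com    # or a one-shot sync against a public NTP server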

2.2 Check the certificate validity

This gives a more direct view of the certificates' validity!

  $ kubeadm alpha certs check-expiration
  [check-expiration] Reading configuration from the cluster...
  [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
   
  CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
  admin.conf                 Mar 03, 2022 16:02 UTC   2d                                      no
  apiserver                  Mar 03, 2022 16:02 UTC   2d              ca                      no
  apiserver-etcd-client      Mar 03, 2022 16:02 UTC   2d              etcd-ca                 no
  apiserver-kubelet-client   Mar 03, 2022 16:02 UTC   2d              ca                      no
  controller-manager.conf    Mar 03, 2022 16:02 UTC   2d                                      no
  etcd-healthcheck-client    Mar 03, 2022 16:02 UTC   2d              etcd-ca                 no
  etcd-peer                  Mar 03, 2022 16:02 UTC   2d              etcd-ca                 no
  etcd-server                Mar 03, 2022 16:02 UTC   2d              etcd-ca                 no
  front-proxy-client         Mar 03, 2022 16:02 UTC   2d              front-proxy-ca          no
  scheduler.conf             Mar 03, 2022 16:02 UTC   2d                                      no

  CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
  ca                      Mar 01, 2031 16:02 UTC   9y              no
  etcd-ca                 Mar 01, 2031 16:02 UTC   9y              no
  front-proxy-ca          Mar 01, 2031 16:02 UTC   9y              no

If the certificates have already expired, you will see something like this instead:

  $ kubectl get pod -n kube-system
  Unable to connect to the server: x509: certificate has expired or is not yet valid

2.3 Back up the cluster configuration

  $ kubeadm config view > /root/kubeadm.yaml
  $ cat /root/kubeadm.yaml
  apiServer:
    extraArgs:
      authorization-mode: Node,RBAC
    timeoutForControlPlane: 4m0s
  apiVersion: kubeadm.k8s.io/v1beta2
  certificatesDir: /etc/kubernetes/pki
  clusterName: kubernetes
  controllerManager: {}
  dns:
    type: CoreDNS
  etcd:
    local:
      dataDir: /var/lib/etcd
  imageRepository: registry.aliyuncs.com/google_containers
  kind: ClusterConfiguration
  kubernetesVersion: v1.18.3
  networking:
    dnsDomain: cluster.local
    podSubnet: 10.244.0.0/16
    serviceSubnet: 10.96.0.0/12
  scheduler: {}
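
Note that the kubeadm config view subcommand is no longer available in newer kubeadm releases; the same ClusterConfiguration can be read straight from the kubeadm-config ConfigMap instead:

  $ kubectl -n kube-system get cm kubeadm-config -o yaml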

2.4 Back up the certificates

The backup is mainly there so you can roll back easily if the renewal fails!

  $ cp -rp /etc/kubernetes /etc/kubernetes_$(date +%F)
  $ ls /etc/kubernetes_2022-03-01/
  admin.conf controller-manager.conf kubelet.conf manifests pki scheduler.conf
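
If something does go wrong, a minimal rollback is simply to restore the backup and restart the kubelet (the directory name follows the date used above):

  $ rm -rf /etc/kubernetes
  $ cp -rp /etc/kubernetes_2022-03-01 /etc/kubernetes
  $ systemctl restart kubelet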

2.5 Renew the certificates

  $ kubeadm alpha certs renew all --config=/root/kubeadm.yaml

2.6 Confirm the new certificate validity

  $ kubeadm alpha certs check-expiration
  CERTIFICATE                EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
  admin.conf                 Mar 01, 2023 04:02 UTC   364d            no
  apiserver                  Mar 01, 2023 04:02 UTC   364d            no
  apiserver-etcd-client      Mar 01, 2023 04:02 UTC   364d            no
  apiserver-kubelet-client   Mar 01, 2023 04:02 UTC   364d            no
  controller-manager.conf    Mar 01, 2023 04:02 UTC   364d            no
  etcd-healthcheck-client    Mar 01, 2023 04:02 UTC   364d            no
  etcd-peer                  Mar 01, 2023 04:02 UTC   364d            no
  etcd-server                Mar 01, 2023 04:02 UTC   364d            no
  front-proxy-client         Mar 01, 2023 04:02 UTC   364d            no
  scheduler.conf             Mar 01, 2023 04:02 UTC   364d            no

2.7 Regenerate the kubeconfig files

  $ rm -f /etc/kubernetes/*.conf
  $ kubeadm init phase kubeconfig all --config /root/kubeadm.yaml

2.8 Update the client (admin) kubeconfig

  $ cp $HOME/.kube/config{,.default}
  $ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  $ chown $(id -u):$(id -g) $HOME/.kube/config

2.9 Restart the related pods

  $ docker ps |egrep "k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd" | awk '{print $1}' | xargs docker rm -f

Or, to keep things simple, just restart Docker!
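
For example (assuming Docker is managed by systemd):

  $ systemctl restart docker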

2.10 Check that the pods are running normally

  $ kubectl get pod -A
  NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
  kube-system   coredns-58cc8c89f4-8lq2k             1/1     Running   1          363d
  kube-system   coredns-58cc8c89f4-hz774             1/1     Running   1          363d
  kube-system   etcd-k8s-master                      1/1     Running   1          363d
  kube-system   kube-apiserver-k8s-master            1/1     Running   1          363d
  kube-system   kube-controller-manager-k8s-master   1/1     Running   1          363d
  kube-system   kube-flannel-ds-amd64-fh9nx          1/1     Running   1          363d
  kube-system   kube-flannel-ds-amd64-gmjth          1/1     Running   1          363d
  kube-system   kube-flannel-ds-amd64-mvtdg          1/1     Running   1          363d
  kube-system   kube-proxy-8dtfw                     1/1     Running   1          363d
  kube-system   kube-proxy-9xwgb                     1/1     Running   1          363d
  kube-system   kube-proxy-zcdvn                     1/1     Running   1          363d
  kube-system   kube-scheduler-k8s-master            1/1     Running   1          363d

2.11 Renew the kubelet certificate on the nodes

  $ cp /etc/kubernetes/kubelet.conf{,.default}
  # kubeadm init phase kubeconfig kubelet --node-name <node name> --kubeconfig-dir /tmp/ --apiserver-advertise-address <cluster VIP>, for example:
  $ kubeadm init phase kubeconfig kubelet --node-name k8s-master --kubeconfig-dir /tmp/ --apiserver-advertise-address 10.4.7.10
  $ \cp /tmp/kubelet.conf /etc/kubernetes/
  $ systemctl restart kubelet

The kubelet config file generated on the master can be reused by the worker nodes!
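
If kubelet client certificate rotation is enabled (it is by default on recent kubeadm versions), the certificate the kubelet actually presents lives under /var/lib/kubelet/pki and is rotated automatically; it can be inspected like any other certificate:

  $ openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates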

3. Compile kubeadm from source with a custom certificate validity

3.1 Back up the cluster configuration

  $ kubeadm config view > kubeadm-cluster.yaml # backup
  $ kubeadm version
  kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.18.3", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:15:39Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
  # the version here is 1.18.3

3.2 Get the matching kubeadm source code

  $ wget https://github.com/kubernetes/kubernetes/archive/v1.18.3.tar.gz
  $ tar zxvf v1.18.3.tar.gz

3.3 Modify the CA certificate validity

  $ vim kubernetes-1.18.3/staging/src/k8s.io/client-go/util/cert/cert.go
  65     NotBefore:             now.UTC(),
  66     NotAfter:              now.Add(duration365d * 100).UTC(), // default is 10 (years), change it to 100
  67     KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
  68     BasicConstraintsValid: true,
  69     IsCA:                  true,

3.4 Modify the validity of the other certificates

  $ vim kubernetes-1.18.3/cmd/kubeadm/app/constants/constants.go
  # jump to line 46 and change it as follows (append * 100):
  46 CertificateValidity = time.Hour * 24 * 365 * 100
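
Both edits can also be applied non-interactively with sed; this is only a sketch, so check the files afterwards, since the exact formatting may differ slightly between releases:

  $ cd kubernetes-1.18.3
  # CA certificates: 10 years -> 100 years
  $ sed -i 's/duration365d \* 10)/duration365d * 100)/' staging/src/k8s.io/client-go/util/cert/cert.go
  # component certificates: 1 year -> 100 years
  $ sed -i 's/CertificateValidity = time.Hour \* 24 \* 365$/CertificateValidity = time.Hour * 24 * 365 * 100/' cmd/kubeadm/app/constants/constants.go
  # double-check both changes before building
  $ grep -n 'duration365d \* 100\|CertificateValidity =' staging/src/k8s.io/client-go/util/cert/cert.go cmd/kubeadm/app/constants/constants.go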

3.5 Install the Go toolchain for the build

  $ wget https://dl.google.com/go/go1.13.9.linux-amd64.tar.gz
  $ tar zxf go1.13.9.linux-amd64.tar.gz -C /usr/local/
  $ echo 'export PATH=/usr/local/go/bin:$PATH' >> /etc/profile
  $ source /etc/profile
  $ go version
  go version go1.13.9 linux/amd64

3.6 Configure a Go module proxy for mainland China

Since Go 1.13, the module proxy can be changed by setting the GOPROXY variable. The default proxy, https://proxy.golang.org, frequently times out when accessed from mainland China! See: https://github.com/goproxy/goproxy.cn/blob/master/README.zh-CN.md
Just run the following in a terminal:

  $ go env -w GOPROXY=https://goproxy.cn,direct
  $ go env -w GOSUMDB="sum.golang.google.cn"

3.7 Build kubeadm

  $ cd kubernetes-1.18.3/ # enter the kubeadm source directory
  $ make all WHAT=cmd/kubeadm GOFLAGS=-v
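
The build output lands under _output; it is worth confirming the new binary runs before replacing the system one:

  $ _output/local/bin/linux/amd64/kubeadm version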

3.8 Replace the kubeadm binary

  $ cp /usr/bin/kubeadm{,.bak}
  $ \cp _output/local/bin/linux/amd64/kubeadm /usr/bin

3.9 Renew the cluster certificates

  $ kubeadm config view > kubeadm-cluster.yaml
  # If there are multiple master nodes, copy the kubeadm-cluster.yaml file and the newly built kubeadm binary to the other masters

  # Renew the certificates (with multiple masters, run this on every master)
  $ kubeadm alpha certs renew all --config=kubeadm-cluster.yaml
  W0904 07:23:15.938694 59308 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
  certificate for serving the Kubernetes API renewed
  certificate the apiserver uses to access etcd renewed
  certificate for the API server to connect to kubelet renewed
  certificate embedded in the kubeconfig file for the controller manager to use renewed
  certificate for liveness probes to healthcheck etcd renewed
  certificate for etcd nodes to communicate with each other renewed
  certificate for serving etcd renewed
  certificate for the front proxy client renewed
  certificate embedded in the kubeconfig file for the scheduler manager to use renewed

3.10 Regenerate the kubeconfig files

  $ rm -f /etc/kubernetes/*.conf
  $ kubeadm init phase kubeconfig all --config kubeadm-cluster.yaml
  W0904 07:25:41.882636 61426 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file

3.11 Restart the related pods

On every master, restart the four control-plane containers (kube-apiserver, kube-controller-manager, kube-scheduler, and etcd) so that the new certificates take effect.

  $ docker ps |egrep "k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd" | awk '{print $1}' | xargs docker restart

3.12 Replace the admin kubeconfig

  $ cp ~/.kube/config{,.old}
  $ \cp -i /etc/kubernetes/admin.conf ~/.kube/config
  $ chown $(id -u):$(id -g) ~/.kube/config

3.13 Confirm kubectl works normally

  $ kubectl get pod -A
  NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
  kube-system   calico-kube-controllers-5b8b769fcd-cpls6   1/1     Running   0          13h
  kube-system   calico-node-2hk5w                          1/1     Running   0          13h
  kube-system   calico-node-bwmmk                          1/1     Running   0          13h
  kube-system   calico-node-gvldn                          1/1     Running   0          13h
  kube-system   coredns-546565776c-g7j2f                   1/1     Running   0          13h
  kube-system   coredns-546565776c-wtxt4                   1/1     Running   0          13h
  kube-system   etcd-k8s-master                            1/1     Running   0          13h
  kube-system   kube-apiserver-k8s-master                  1/1     Running   0          13h
  kube-system   kube-controller-manager-k8s-master         1/1     Running   1          13h
  kube-system   kube-proxy-bwkv6                           1/1     Running   0          13h
  kube-system   kube-proxy-jdzps                           1/1     Running   0          13h
  kube-system   kube-proxy-xjpxf                           1/1     Running   0          13h
  kube-system   kube-scheduler-k8s-master                  1/1     Running   0          13h
  kube-system   kuboard-7986796cf8-mk66v                   1/1     Running   0          12h
  kube-system   metrics-server-7f96bbcc66-qldnm            1/1     Running   0          12h

3.14 Confirm the certificates were renewed successfully

  $ kubeadm alpha certs check-expiration
  [check-expiration] Reading configuration from the cluster...
  [check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
   
  CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
  admin.conf                 Mar 03, 2121 16:02 UTC   99y                                     no
  apiserver                  Mar 03, 2121 16:02 UTC   99y             ca                      no
  apiserver-etcd-client      Mar 03, 2121 16:02 UTC   99y             etcd-ca                 no
  apiserver-kubelet-client   Mar 03, 2121 16:02 UTC   99y             ca                      no
  controller-manager.conf    Mar 03, 2121 16:02 UTC   99y                                     no
  etcd-healthcheck-client    Mar 03, 2121 16:02 UTC   99y             etcd-ca                 no
  etcd-peer                  Mar 03, 2121 16:02 UTC   99y             etcd-ca                 no
  etcd-server                Mar 03, 2121 16:02 UTC   99y             etcd-ca                 no
  front-proxy-client         Mar 03, 2121 16:02 UTC   99y             front-proxy-ca          no
  scheduler.conf             Mar 03, 2121 16:02 UTC   99y                                     no

  CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
  ca                      Mar 01, 2031 16:02 UTC   9y              no
  etcd-ca                 Mar 01, 2031 16:02 UTC   9y              no
  front-proxy-ca          Mar 01, 2031 16:02 UTC   9y              no
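
Finally, if you ever want to go back to the distribution's kubeadm, the backup made in step 3.8 can simply be restored:

  $ \cp /usr/bin/kubeadm.bak /usr/bin/kubeadm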