Certificate Management with kubeadm
1 Checking certificate expiration
You can use the check-expiration subcommand to check when certificates expire:
kubeadm certs check-expiration
The output is similar to the following:
[root@k8s-master /]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Oct 17, 2024 07:40 UTC 261d ca no
apiserver Oct 17, 2024 07:40 UTC 261d ca no
apiserver-etcd-client Oct 17, 2024 07:40 UTC 261d etcd-ca no
apiserver-kubelet-client Oct 17, 2024 07:40 UTC 261d ca no
controller-manager.conf Oct 17, 2024 07:40 UTC 261d ca no
etcd-healthcheck-client Oct 17, 2024 07:40 UTC 261d etcd-ca no
etcd-peer Oct 17, 2024 07:40 UTC 261d etcd-ca no
etcd-server Oct 17, 2024 07:40 UTC 261d etcd-ca no
front-proxy-client Oct 17, 2024 07:40 UTC 261d front-proxy-ca no
scheduler.conf Oct 17, 2024 07:40 UTC 261d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Oct 15, 2033 07:40 UTC 9y no
etcd-ca Oct 15, 2033 07:40 UTC 9y no
front-proxy-ca Oct 15, 2033 07:40 UTC 9y no
The command shows the expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificates embedded in the kubeconfig files used by kubeadm (admin.conf, controller-manager.conf and scheduler.conf).
In addition, kubeadm tells you whether a certificate is externally managed; in that case, you must take care of renewing it manually or with other tools.
Warning: kubeadm cannot manage certificates signed by an external CA.
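For an externally managed certificate you can inspect the expiry directly with openssl rather than kubeadm. The sketch below generates a throwaway self-signed certificate so it is self-contained; on a real node you would point openssl at a file such as /etc/kubernetes/pki/apiserver.crt (GNU date is assumed for the residual-time arithmetic):

```shell
# Sketch: read a certificate's expiry with openssl. A throwaway
# self-signed cert stands in for e.g. /etc/kubernetes/pki/apiserver.crt.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 365 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# notAfter is the expiration date, like the EXPIRES column above
notafter=$(openssl x509 -enddate -noout -in "$tmp/cert.pem" | cut -d= -f2)
echo "expires: $notafter"
# Remaining days, like the RESIDUAL TIME column (GNU date syntax)
days=$(( ( $(date -d "$notafter" +%s) - $(date +%s) ) / 86400 ))
echo "residual: ${days}d"
rm -rf "$tmp"
```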
2 Automatic certificate renewal
kubeadm renews all certificates during a control plane upgrade.
This feature is designed to address the simplest use cases: if you have no special requirements for certificate renewal and perform Kubernetes version upgrades regularly (less than one year between upgrades), kubeadm will keep your cluster up to date and reasonably secure.
3 Manual certificate renewal
You can renew your certificates manually at any time with the kubeadm certs renew command.
This command performs the renewal using the CA (or front-proxy CA) certificate and the key stored in /etc/kubernetes/pki.
After running the command you must restart the control plane Pods. This is required because dynamic certificate reload is not yet supported by all components and certificates. Static Pods are managed by the local kubelet, not by the API server, so kubectl cannot be used to delete or restart them. To restart a static Pod, temporarily move its manifest file out of /etc/kubernetes/manifests/ and wait 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct). The kubelet terminates a Pod whose manifest is no longer in the directory. After another fileCheckFrequency period you can move the file back; the kubelet will recreate the Pod and the component's certificate renewal is complete.
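The move-and-restore procedure above can be sketched as a script. Throwaway directories are used here so the sketch runs anywhere; on a real control-plane node MANIFESTS would be /etc/kubernetes/manifests and the sleep would need to exceed fileCheckFrequency:

```shell
# Sketch of the static-Pod restart procedure, using demo directories.
# On a real node: MANIFESTS=/etc/kubernetes/manifests, sleep > 20s.
set -e
demo=$(mktemp -d)
MANIFESTS="$demo/manifests"
BACKUP="$demo/backup"
mkdir -p "$MANIFESTS" "$BACKUP"
touch "$MANIFESTS/kube-apiserver.yaml" "$MANIFESTS/etcd.yaml"

# Step 1: move the manifests away; the kubelet will stop the static Pods.
mv "$MANIFESTS"/*.yaml "$BACKUP"/

# Step 2: wait longer than fileCheckFrequency (20s by default).
sleep 1   # shortened for the demo

# Step 3: move them back; the kubelet recreates the Pods with the new certs.
mv "$BACKUP"/*.yaml "$MANIFESTS"/
ls "$MANIFESTS"
```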
Warning: If you run an HA cluster, this command must be executed on all control-plane nodes.
Note: certs renew uses the existing certificates as the authoritative source for attributes (Common Name, Organization, SAN, and so on) rather than the kubeadm-config ConfigMap. It is strongly recommended to keep them in sync.
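To check that they are in sync, you can dump the attributes that certs renew will preserve (subject and SANs) and compare them with what kubeadm-config declares. The sketch below generates a throwaway certificate with a SAN so it is self-contained (OpenSSL 1.1.1+ is assumed for -addext/-ext); on a real node you would read /etc/kubernetes/pki/apiserver.crt instead:

```shell
# Sketch: show the subject and SANs of a certificate, for comparison
# with the kubeadm-config ConfigMap. Throwaway cert stands in for
# /etc/kubernetes/pki/apiserver.crt. Requires OpenSSL 1.1.1+.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,IP:10.0.0.150" -days 365 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
out=$(openssl x509 -noout -subject -ext subjectAltName -in "$tmp/cert.pem")
echo "$out"
rm -rf "$tmp"
```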
4 Testing
Run the following command:
kubeadm certs renew all
The output is:
[root@k8s-master /]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@k8s-master /]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Jan 29, 2025 06:00 UTC 364d ca no
apiserver Jan 29, 2025 06:00 UTC 364d ca no
apiserver-etcd-client Jan 29, 2025 06:00 UTC 364d etcd-ca no
apiserver-kubelet-client Jan 29, 2025 06:00 UTC 364d ca no
controller-manager.conf Jan 29, 2025 06:00 UTC 364d ca no
etcd-healthcheck-client Jan 29, 2025 06:00 UTC 364d etcd-ca no
etcd-peer Jan 29, 2025 06:00 UTC 364d etcd-ca no
etcd-server Jan 29, 2025 06:00 UTC 364d etcd-ca no
front-proxy-client Jan 29, 2025 06:00 UTC 364d front-proxy-ca no
scheduler.conf Jan 29, 2025 06:00 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Oct 15, 2033 07:40 UTC 9y no
etcd-ca Oct 15, 2033 07:40 UTC 9y no
front-proxy-ca Oct 15, 2033 07:40 UTC 9y no
Restart kube-apiserver, kube-controller-manager, kube-scheduler, and etcd:
# Enter the control-plane Pod manifest directory
[root@k8s-master secret]# cd /etc/kubernetes/manifests/
[root@k8s-master manifests]# ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
# Show the control-plane Pods
[root@k8s-master manifests]# kubectl get po --all-namespaces -l tier=control-plane
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-master 1/1 Running 66 (4h12m ago) 103d
kube-system kube-apiserver-k8s-master 1/1 Running 72 (22h ago) 103d
kube-system kube-controller-manager-k8s-master 1/1 Running 122 (4h12m ago) 103d
kube-system kube-scheduler-k8s-master 1/1 Running 129 (4h12m ago) 103d
# Move the manifest files out
[root@k8s-master manifests]# mv * ../backup/
[root@k8s-master manifests]# ls
[root@k8s-master manifests]#
# Check the control-plane Pods
[root@k8s-master manifests]# kubectl get po --all-namespaces -l tier=control-plane
The connection to the server 10.0.0.150:6443 was refused - did you specify the right host or port?
# Move the manifest files back
[root@k8s-master manifests]# cd ../backup/
[root@k8s-master backup]# ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
[root@k8s-master backup]# cp * ../manifests/
# Check the control-plane Pods
[root@k8s-master backup]# kubectl get po --all-namespaces -l tier=control-plane
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-master 1/1 Running 0 103d
kube-system kube-apiserver-k8s-master 1/1 Running 0 103d
kube-system kube-controller-manager-k8s-master 1/1 Running 0 103d
kube-system kube-scheduler-k8s-master 1/1 Running 0 103d
# Check workloads
A note on this: restarting the control plane Pods in this way has no impact on running workloads and services, so it can be done online. However, the cluster cannot be administered until the control plane Pods are back up.
From: https://www.cnblogs.com/goldtree358/p/17997002