Summary
All sorts of problems come up while running Kubernetes. This post collects the k8s issues I have run into and the solutions that worked for me.
1. kubectl fails after a k8s restart: The connection to the server 192.168.102.149:6443 was refused
1.1 Symptom
After the node was restarted, kubectl reports an error:
# kubectl get pods
The connection to the server xxx:6443 was refused - did you specify the right host or port?
1.2 Troubleshooting
The error says the connection to port 6443 was refused. Port 6443 is the secure port of kube-apiserver, which runs as a static pod managed by kubelet, so first check whether anything is listening on it:
ss -antulp | grep :6443
The command returns nothing, i.e. the port is not listening. That points to kubelet having failed to bring up the control-plane pods. Restart kubelet and check its status; it indeed fails to start. Inspect the logs:
systemctl status kubelet
journalctl -xefu kubelet
The logs suggest that some components failed to start. Checking the container status shows that none of the control-plane containers are running; restarting docker and the related containers produces an error:
[root@master ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
56f463b5684b 9b60aca1d818 "kube-controller-man…" 40 hours ago Exited (2) 39 hours ago k8s_kube-controller-manager_kube-controller-manager-master_kube-system_8f99a56fb3eeae0c61283d6071bfb1f4_5
5043f1103f1f aaefbfa906bd "kube-scheduler --au…" 40 hours ago Exited (2) 39 hours ago k8s_kube-scheduler_kube-scheduler-master_kube-system_285062c53852ebaf796eba8548d69e43_5
2d707069ab22 bfe3a36ebd25 "/coredns -conf /etc…" 41 hours ago Exited (0) 39 hours ago k8s_coredns_coredns-6d56c8448f-mt7vz_kube-system_abc65488-0a54-4a1a-8e23-339f3f23f6d2_0
0dadfca20cb7 bfe3a36ebd25 "/coredns -conf /etc…" 41 hours ago Exited (0) 39 hours ago k8s_coredns_coredns-6d56c8448f-hdtlf_kube-system_e1f90d02-77d0-4529-bea5-b4a72cdb4cf5_0
f25051c775cf registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 41 hours ago Exited (0) 39 hours ago k8s_POD_coredns-6d56c8448f-mt7vz_kube-system_abc65488-0a54-4a1a-8e23-339f3f23f6d2_0
b24a10712152 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 41 hours ago Exited (0) 39 hours ago k8s_POD_coredns-6d56c8448f-hdtlf_kube-system_e1f90d02-77d0-4529-bea5-b4a72cdb4cf5_0
fed8e33864c1 e708f4bb69e3 "/opt/bin/flanneld -…" 41 hours ago Exited (137) 39 hours a
[root@master ~]# docker start $(docker ps -a | awk '{ print $1}' | tail -n +2)
Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
1.3 Solution
The error means the cgroup driver settings in the docker configuration file are wrong: the cgroup-parent passed to the systemd cgroup driver is not a valid "*.slice" name. Comment out or correct the offending option, then restart docker and afterwards kubelet. Do not restart the containers by hand: they depend on a specific start-up order, so manual restarts are not recommended unless you know exactly what you are doing.
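A minimal sketch of the kind of fix meant here, assuming the offending option lives in /etc/docker/daemon.json (the exact file and keys on your host may differ):
# Inspect the docker daemon config for cgroup-related options
cat /etc/docker/daemon.json
# kubelet and docker should agree on the cgroup driver; a typical working setting is:
# { "exec-opts": ["native.cgroupdriver=systemd"] }
# After removing or fixing the bad cgroup-parent entry, restart docker first, then kubelet:
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet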
2. kubectl reports: Unable to connect to the server: x509: certificate signed by unknown authority
2.1 Symptom
kubectl get nodes fails:
kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
2.2 Cause
The credentials in /root/.kube/config do not match the cluster CA; kubectl needs the correct certificate. This typically happens when kubeadm reset was run but the old kubeconfig and certificate cache were not removed.
2.3 Solution
Delete the old certificate and cache files:
# rm -rf /root/.kube/*
Then copy the kubeconfig from the master node into this directory.
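A sketch of the copy step, assuming a standard kubeadm layout where the admin kubeconfig lives at /etc/kubernetes/admin.conf on the master (the master hostname is a placeholder):
mkdir -p /root/.kube
scp root@master:/etc/kubernetes/admin.conf /root/.kube/config
kubectl get nodes    # should now authenticate against the cluster CA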
3. Switching the container runtime of a k8s cluster: docker -> containerd
3.1 Solution for a kubelet installed via kubeadm
# View the default configuration with kubeadm:
kubeadm config print init-defaults --component-configs KubeletConfiguration
To switch the runtime from the default docker to containerd, edit the following file:
vim /var/lib/kubelet/kubeadm-flags.env
Add the following flags to KUBELET_KUBEADM_ARGS:
--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock
Example:
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5"
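After editing the file, reload systemd and restart kubelet so the new flags take effect, then check which runtime each node reports (standard systemd/kubectl commands; no assumptions beyond a kubeadm install):
systemctl daemon-reload
systemctl restart kubelet
kubectl get nodes -o wide    # the CONTAINER-RUNTIME column should now show containerd://...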
3.2 Solution for a kubelet deployed directly from the binary
Edit the /usr/lib/systemd/system/kubelet.service file and add the startup flags:
--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
#systemctl daemon-reload && systemctl restart kubelet
Note that kubeadm manages the kubelet service with systemd drop-ins, so on a kubeadm-installed node editing /usr/lib/systemd/system/kubelet.service directly has no effect; change the flags via the drop-in and kubeadm-flags.env instead, as in 3.1.
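For reference, on a kubeadm node the drop-in is typically /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, which sources /var/lib/kubelet/kubeadm-flags.env (paths assumed from a standard kubeadm install):
systemctl cat kubelet    # shows the unit file plus every drop-in actually in effect
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf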
4. coredns certificate error when talking to the apiserver
4.1 Symptom
kubectl describe pod -n kube-system coredns-757569d647-qj8ts
Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "b7ea16c5b21e06069d1418b322e04bd2da482acdf21f863f47c96a80c551eab5" network for pod "coredns-757569d647-qj8ts": networkPlugin cni failed to set up pod "coredns-757569d647-qj8ts_kube-system" network: error getting ClusterInformation: Get https://[10.31.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), failed to clean up sandbox container "b7ea16c5b21e06069d1418b322e04bd2da482acdf21f863f47c96a80c551eab5" network for pod "coredns-757569d647-qj8ts": networkPlugin cni failed to teardown pod "coredns-757569d647-qj8ts_kube-system" network: error getting ClusterInformation: Get https://[10.31.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
We compared the CA data in the secret generated for coredns in every way we could think of:
kubectl get secrets -n kube-system coredns-token-xc8kc -o yaml
The ca.crt in the secret turned out to be exactly the same as the certificate-authority-data recorded in /etc/kubernetes/admin.conf on the host, yet the pod still could not reach the kube-apiserver service.
Checking with ipvsadm -Ln did not reveal anything wrong either.
The eventual fix was to decode the CA data in admin.conf: copy the base64 content after certificate-authority-data: into a file such as ca.txt, restore the certificate with base64 -d ./ca.txt, save the output to /etc/pki/ca-trust/source/anchors/kube.pem, and then mount /etc/pki into the coredns pod by editing its Deployment (see the manifest snippet in 4.2).
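A sketch of the decode step described above (field name and paths as in the narrative; the trust-anchor directory assumes a CentOS/RHEL-style layout):
# extract and decode the cluster CA from admin.conf, then install it as a trust anchor
grep certificate-authority-data /etc/kubernetes/admin.conf | awk '{print $2}' > ca.txt
base64 -d ./ca.txt > /etc/pki/ca-trust/source/anchors/kube.pem
update-ca-trust extract    # refresh the system trust store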
4.2 Solution
The relevant volumeMounts/volumes snippet of the coredns Deployment:
volumeMounts:
- name: config-volume
  mountPath: /etc/coredns
  readOnly: true
- name: etc-pki
  mountPath: /etc/pki
  readOnly: true
volumes:
- name: config-volume
  configMap:
    name: coredns
    items:
    - key: Corefile
      path: Corefile
- name: etc-pki
  hostPath:
    path: /etc/pki
    type: DirectoryOrCreate
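To apply the change, edit the Deployment in place and wait for the rollout (standard kubectl workflow; the Deployment name may differ in your cluster):
kubectl -n kube-system edit deployment coredns
kubectl -n kube-system rollout status deployment coredns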
5. Anomaly during a k8s rolling update
5.1 Symptom
A rolling update of a Deployment did not take effect: queried through kube-apiserver, the Deployment was already at the latest version, but no Pods of that new version were created.
We first suspected a kube-controller-manager bug, but its logs showed nothing obviously abnormal. After raising the controller-manager log level and restarting it, it looked as if controller-manager had simply never watched the update event, and we still could not find the cause. Then, while watching kube-apiserver, something strange happened: the earlier Deployment rolled out normally after all.
5.2 Cause
Inconsistent data across the etcd members.
Since the kube-apiserver logs likewise offered no useful clues, our first guess was that a stale kube-apiserver cache was to blame. Just as we were about to dig in from that angle, another odd symptom appeared: a newly created pod could not be found via kubectl. That narrowed things down, because a kube-apiserver list operation is not served from cache; the data is read straight from etcd and returned to the client. So suspicion shifted to etcd.
etcd is a strongly consistent (CP) key-value store: after a successful write, two reads must not return different data. We queried the cluster status and data directly with etcdctl. The cluster reported healthy and the Raft indexes matched, and the etcd logs contained no errors; the only suspicious point was that the db sizes of the three members differed noticeably. We then pointed the client at each member's endpoint in turn and counted the keys: the three members returned different counts, and querying the freshly created pod directly with etcdctl succeeded against some endpoints and failed against others. At that point it was essentially confirmed that the etcd members held inconsistent data.
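A sketch of the per-member comparison described above (the endpoints are placeholders; the certificate paths assume a standard kubeadm layout and ETCDCTL_API=3):
# query each member individually by pinning --endpoints to a single node
for ep in https://10.0.0.1:2379 https://10.0.0.2:2379 https://10.0.0.3:2379; do
  echo "== $ep =="
  etcdctl --endpoints=$ep \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint status -w table                     # db size and raft index per member
  etcdctl --endpoints=$ep \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    get /registry --prefix --keys-only | wc -l   # key count per member
done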
5.3 Analysis and investigation
Initial verification
A cluster that is running steadily without external changes rarely develops a problem this severe. Looking at the etcd cluster's recent change records, we found that in a release the day before the incident the db had filled up (the db size quota had been configured too small), leaving the cluster unable to accept new writes. To fix that, the operators updated the db size and compaction-related settings and restarted etcd, and after the restart they manually ran compact and defrag on etcd to shrink the db, roughly as sketched below.
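For reference, the manual compact/defrag operations look roughly like this (a sketch; the endpoint is a placeholder, and the TLS flags --cacert/--cert/--key from the previous snippet are omitted for brevity):
# compact up to the current revision, then defragment the db
rev=$(etcdctl --endpoints=https://10.0.0.1:2379 endpoint status -w json | grep -o '"revision":[0-9]*' | grep -oE '[0-9]+')
etcdctl --endpoints=https://10.0.0.1:2379 compact "$rev"
etcdctl --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379 defrag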
From the above we can list the suspicious trigger conditions:
1. the db hitting its size quota (db full)
2. the db size and compaction configuration update
3. the manual compact and defrag operations
4. the etcd restart
From: https://blog.51cto.com/u_13643065/6168619