
k8s initialization error: [kubelet-check] Initial timeout of 40s passed.


Before running kubeadm init, you can list the images the control plane needs (including k8s.gcr.io/pause:3.6) and pre-pull them:

kubeadm config images list
kubeadm config images pull

kubeadm init then failed with the following output:

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

 

Go into /etc/systemd/system/kubelet.service.d, check whether 10-kubeadm.conf exists, and append the following at the end of that file:

Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"

command failed" err="failed to run Kubelet: validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock

In this case, the kubelet service failed to start with: "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""

 

After some analysis, the cause turned out to be that Kubernetes defaults the cgroup driver to systemd, while the docker service uses cgroupfs. There are two ways to fix this: either change the docker configuration to match Kubernetes, or change the Kubernetes (kubelet) configuration to cgroupfs. The first approach is used here.

Edit the docker service configuration file /etc/docker/daemon.json and add the following:

"exec-opts": ["native.cgroupdriver=systemd"]

Restart the docker service:

sudo systemctl daemon-reload
sudo systemctl restart docker

Restart the kubelet:

systemctl restart kubelet
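You can then verify that docker picked up the new cgroup driver and that the kubelet stays running (quick checks, not part of the original post):

docker info | grep -i 'cgroup driver'
systemctl status kubelet

The first command should now report Cgroup Driver: systemd.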

So what is a cgroup, and what are the systemd and cgroupfs drivers?

cgroup drivers

On Linux, control groups (cgroups) are used to constrain the resources allocated to processes.

Both the kubelet and the underlying container runtime need to interface with control groups in order to enforce resource management for Pods and containers and to set requests and limits for resources such as CPU and memory. To do this, the kubelet and the container runtime each use a cgroup driver. The critical point is that the kubelet and the container runtime must use the same cgroup driver with the same configuration.
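To see which driver each side is actually using on a kubeadm-provisioned node, you can check the runtime and the kubelet configuration directly (the path below is the usual kubeadm default; adjust if your setup differs):

docker info --format '{{.CgroupDriver}}'
grep cgroupDriver /var/lib/kubelet/config.yaml

If the two values differ, the kubelet refuses to start with exactly the mismatch error shown earlier.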

Two cgroup drivers are available:

The cgroupfs driver

The cgroupfs driver is the default cgroup driver in the kubelet. When the cgroupfs driver is used, the kubelet and the container runtime interface directly with the cgroup filesystem to configure cgroups.

When systemd is the init system, the cgroupfs driver is not recommended, because systemd expects a single cgroup manager on the system. Additionally, if you use cgroup v2, use the systemd cgroup driver instead of cgroupfs.

The systemd cgroup driver

When a Linux distribution uses systemd as its init system, the init process creates and consumes a root control group (cgroup) and acts as the cgroup manager.

systemd is tightly integrated with cgroups and allocates a cgroup for each systemd unit. As a result, if you use systemd as the init system together with the cgroupfs driver, the system ends up with two different cgroup managers.

Two cgroup managers lead to two different views of the resources available and in use on the system. In some cases, nodes that are configured to use cgroupfs for the kubelet and the container runtime, but systemd for the rest of the processes, become unstable under resource pressure.

When systemd is the chosen init system, the way to mitigate this instability is to use systemd as the cgroup driver for both the kubelet and the container runtime.

To set systemd as the cgroup driver, edit the cgroupDriver option of the KubeletConfiguration and set it to systemd. For example:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
cgroupDriver: systemd
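With kubeadm, this KubeletConfiguration block can be appended, after a --- separator, to the configuration file passed to kubeadm init --config. A minimal sketch, assuming a file named kubeadm-config.yaml and a kubeadm release that uses the v1beta3 config API:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

kubeadm init --config kubeadm-config.yaml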

If you configure systemd as the cgroup driver for the kubelet, you must also configure systemd as the cgroup driver for the container runtime. Refer to your container runtime's documentation for instructions. For example:
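For docker, that is the daemon.json change shown above. For containerd, the equivalent change is to enable SystemdCgroup for the runc runtime in /etc/containerd/config.toml and then restart containerd (a sketch based on the standard containerd CRI configuration):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

systemctl restart containerd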

From: https://www.cnblogs.com/ruiy/p/16970886.html
