These instructions apply to Kubernetes 1.31.
1. Make sure SELinux has already been set to permissive mode:
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
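A quick optional check that the change took effect:
getenforce
# should print "Permissive"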
2. Download and install the required packages
sudo yum install -y yum-utils
# On a machine with Internet access, download the packages locally; they can then be
# installed in an offline environment. If a package cannot be found, see the official
# repository configuration in Section 7 (Supplement).
sudo yumdownloader kubelet-1.31.0 kubeadm-1.31.0 kubectl-1.31.0 --disableexcludes=kubernetes
# Download the dependencies
sudo yumdownloader kubernetes-cni cri-tools --disableexcludes=kubernetes
# Install the dependencies
sudo yum localinstall -y kubernetes-cni-1.5.0-150500.2.1.aarch64.rpm cri-tools-1.31.1-150500.1.1.aarch64.rpm
# Install kubelet, kubeadm, and kubectl
sudo yum localinstall -y kubelet-1.31.0-150500.1.1.aarch64.rpm kubeadm-1.31.0-150500.1.1.aarch64.rpm kubectl-1.31.0-150500.1.1.aarch64.rpm
# Enable and start kubelet (it will restart in a crash loop until kubeadm init runs; this is expected)
sudo systemctl enable kubelet && sudo systemctl start kubelet
# Verify the installed versions
kubelet --version
kubeadm version
kubectl version --client
3. Initialize Kubernetes with kubeadm
Next, use kubeadm to initialize the Kubernetes cluster. Remember to replace the parameters in the commands to match your environment:
sudo kubeadm config print init-defaults > kubeadm-init.yaml
Edit the kubeadm config file. Three changes are needed:
- Change advertiseAddress: 1.2.3.4 to the current server's IP address. For example, if 10.211.55.3 is the master, set advertiseAddress: 10.211.55.3.
- Change imageRepository (default registry.k8s.io) to imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers if you need a mirror reachable from mainland China.
- Under networking:, add podSubnet: to define the subnet from which Pod IP addresses are allocated.
The edited file looks like this:
apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token # default group for kubeadm bootstrap nodes
  token: abcdef.0123456789abcdef # bootstrap token used by nodes to join the cluster
  ttl: 24h0m0s # the token is valid for 24 hours
  usages:
  - signing # used for signing
  - authentication # used for authentication
kind: InitConfiguration # kind of the initialization configuration
localAPIEndpoint:
  advertiseAddress: 10.211.55.51 # advertise address of the local API, usually the master node's IP
  bindPort: 6443 # bind port of the local API, 6443 by default
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock # use cri-dockerd as the container runtime
  imagePullPolicy: IfNotPresent # pull an image only if it is not present locally
  imagePullSerial: true # whether images are pulled serially
  name: kmaster # hostname of the node
  taints: null # clear the node's taints so any Pod can be scheduled onto it
timeouts:
  controlPlaneComponentHealthCheck: 4m0s # timeout for control-plane component health checks
  discovery: 5m0s # timeout for node discovery
  etcdAPICall: 2m0s # timeout for etcd API calls
  kubeletHealthCheck: 4m0s # timeout for kubelet health checks
  kubernetesAPICall: 1m0s # timeout for Kubernetes API calls
  tlsBootstrap: 5m0s # timeout for TLS bootstrap
  upgradeManifests: 5m0s # timeout for processing upgrade manifests
---
apiServer: {} # kube-apiserver settings
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s # CA certificate validity (about 10 years)
certificateValidityPeriod: 8760h0m0s # validity of the other certificates (about 1 year)
certificatesDir: /etc/kubernetes/pki # certificate storage directory
clusterName: kubernetes # cluster name
controllerManager: {} # controller manager settings
dns: {} # DNS settings
encryptionAlgorithm: RSA-2048 # certificate key algorithm, RSA 2048-bit
etcd:
  local:
    dataDir: /var/lib/etcd # etcd data directory
imageRepository: registry.k8s.io # Kubernetes image repository (swap in the Aliyun mirror here if needed)
kind: ClusterConfiguration # kind of the cluster configuration
kubernetesVersion: 1.31.0 # Kubernetes version
networking:
  dnsDomain: cluster.local # DNS domain of the cluster
  serviceSubnet: 10.96.0.0/12 # Service subnet range
  podSubnet: 10.244.0.0/16 # Pod network range (the added setting)
proxy: {} # proxy settings
scheduler: {} # scheduler settings
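Before running init, you can optionally sanity-check the edited file with kubeadm's built-in validator:
kubeadm config validate --config kubeadm-init.yaml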
Run kubeadm init with the config file:
sudo kubeadm init --config kubeadm-init.yaml
# If initialization hangs at around the 4-minute mark:
sudo vi /usr/lib/systemd/system/kubelet.service
# change the ExecStart line as follows
ExecStart=/usr/local/bin/kubelet --container-runtime-endpoint=unix:///run/cri-dockerd.sock
# and check that the following file contains the line below
sudo vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=node --pod-infra-container-image=registry.k8s.io/pause:3.10"
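After editing the systemd unit, reload systemd and restart kubelet so the change takes effect:
sudo systemctl daemon-reload
sudo systemctl restart kubelet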
3.1 If you see this error:
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: nodes "kmaster" not found
To see the stack trace of this error execute with --v=5 or higher
# then do the following
sudo kubeadm reset -f --cri-socket /var/run/cri-dockerd.sock
sudo systemctl stop kubelet
sudo rm -rf /etc/cni/net.d
sudo rm -rf /var/lib/kubelet/*
sudo rm -rf /var/lib/etcd
sudo systemctl start kubelet
sudo kubeadm init --config kubeadm-init.yaml
3.2 If you see this error:
[kubernetes@Kubernetes02 ~]$ sudo kubeadm join 10.211.55.43:6443 --token lbik51.q7ti5ekg4un7x8a1 --discovery-token-ca-cert-hash sha256:85bc771d5cc7b8406673dfc6e4e0cef00bd6d5de199454fd9c2314e4ae262e9a
[preflight] Running pre-flight checks
    [WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
    [WARNING FileExisting-crictl]: crictl not found in system path
    [WARNING FileExisting-socat]: socat not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileExisting-conntrack]: conntrack not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
Then install conntrack-tools and socat. (To also clear the swap warning, disable swap with sudo swapoff -a.)
sudo yum install -y epel-release
sudo yumdownloader --resolve conntrack-tools
sudo yum localinstall conntrack-tools-*.rpm
sudo yumdownloader --resolve socat
sudo yum localinstall socat-*.rpm
And install crictl:
1. First, extract the crictl archive with tar:
tar -xvf crictl-v1.31.1-linux-arm64.tar.gz
This extracts the crictl executable.
2. Move crictl onto the system PATH
After extraction you will have a crictl executable. Move it into a directory on the system PATH, such as /usr/local/bin:
sudo mv crictl /usr/local/bin/
3. Check the installation
Confirm crictl is installed and on the PATH using which:
which crictl
It should print /usr/local/bin/crictl.
4. Set permissions
Make sure the crictl executable has execute permission:
sudo chmod +x /usr/local/bin/crictl
5. Verify the installation
Verify that crictl works:
crictl --version
This should print crictl's version information.
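Optionally, point crictl at the cri-dockerd socket used in this guide so its subcommands work without per-command endpoint flags; a minimal /etc/crictl.yaml sketch (the 10-second timeout is an arbitrary choice):
# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
timeout: 10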
3.3 If init failed partway and running init again produces the following:
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
then do the following:
sudo kubeadm reset -f --cri-socket /var/run/cri-dockerd.sock
sudo rm -rf /etc/kubernetes
sudo rm -rf /var/lib/etcd
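Then run the initialization again:
sudo kubeadm init --config kubeadm-init.yaml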
4. Join worker nodes to the master
After the master node finishes init, output like the following appears:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.211.55.56:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3a45611fc0a8bca7940837c5868e05fd1923b3c561776d1b111b65212993a8d0
Join the cluster with the following command (adding --cri-socket because the nodes use cri-dockerd):
sudo kubeadm join 10.211.55.56:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3a45611fc0a8bca7940837c5868e05fd1923b3c561776d1b111b65212993a8d0 \
--cri-socket unix:///var/run/cri-dockerd.sock
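If the 24-hour token from the init output has expired, you can generate a fresh join command on the master:
kubeadm token create --print-join-command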
5. Configure kubectl
After initialization completes, configure kubectl as the init output suggests:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
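To confirm kubectl can reach the cluster (the node will show NotReady until a network plugin is deployed in the next step):
kubectl get nodes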
6. Deploy a network plugin
Install the Flannel network plugin (or another network plugin of your choice):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
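If you chose Flannel: its manifest defaults to the 10.244.0.0/16 Pod network, which matches the podSubnet configured earlier. To check that the Flannel pods come up:
kubectl get pods -A | grep flannel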
Or install the Calico network plugin.
Check the Calico/Kubernetes version compatibility matrix at https://docs.tigera.io/calico/3.26/getting-started/kubernetes/requirements
Fetch the manifest:
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico.yaml
Edit calico.yaml: add - name: IP_AUTODETECTION_METHOD, change the value of CALICO_IPV4POOL_IPIP to "Never", and configure the Calico network range by setting - name: CALICO_IPV4POOL_CIDR to value: "10.211.55.0/24" (note: per the manifest's own comment, this should fall within the Pod CIDR / podSubnet configured earlier). The resulting env entries:
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
  value: "k8s,bgp" # Kubernetes cluster with BGP (Border Gateway Protocol) enabled
# IP automatic detection
- name: IP_AUTODETECTION_METHOD
  value: "interface=en.*" # auto-detect using NICs matching en.*, e.g. en0, en1
# Auto-detect the BGP IP address.
- name: IP
  value: "autodetect" # auto-detect the BGP IP address
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Never" # disable IPIP tunnel mode (BGP only)
# Configure Calico subnet
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.211.55.0/24" # Calico's IPv4 pool; Pod IPs are allocated from this range
After making these changes, apply the calico.yaml manifest, for example:
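A minimal apply-and-check sequence, assuming the edited calico.yaml is in the current directory:
kubectl apply -f calico.yaml
kubectl get pods -n kube-system | grep calico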
7. Supplement
Yum repository mirror settings
# Mainland China mirror configuration (recommended inside mainland China)
sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# After configuring, clean the YUM cache and rebuild it:
sudo yum clean all
sudo yum makecache
# Official repository configuration
sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
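Note that this repo file sets exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni, so installs and downloads from it must pass --disableexcludes=kubernetes, as the commands in Section 2 do. For example:
sudo yum list kubelet kubeadm kubectl --disableexcludes=kubernetes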
Series contents:
1. k8s cluster deployment: environment preparation
2. k8s cluster deployment: container runtime
3. k8s cluster deployment: installing kubeadm
Reference: http://www.weifos.com/Home/TechStack/1807017272963891200