Manually Building a Highly Available Kubernetes Cluster (v1.31)
1. Environment Preparation
1.1 Cluster Planning (kept minimal to save resources; adjust as needed)
Hostname | IP Address | Description |
---|---|---|
master | 172.16.20.10 | Master (control-plane) node |
worker | 172.16.20.11 | Worker node |
1.2 Install Dependencies (run on both nodes)
[root@master ~]# yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git socat
1.3 Configure Hostname Mapping (run on both nodes)
[root@master ~]# cat >> /etc/hosts <<EOF
172.16.20.10 master
172.16.20.11 worker
EOF
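As a quick sanity check (not part of the original walkthrough), you can confirm that both names resolve through /etc/hosts before moving on:
# Optional check: both names should resolve via /etc/hosts
[root@master ~]# getent hosts master worker
[root@master ~]# ping -c 1 worker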
1.4 Passwordless SSH Login (run on the master node)
[root@master ~]# ssh-keygen
[root@master ~]# ssh-copy-id root@master
[root@master ~]# ssh-copy-id root@worker
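To confirm key-based login works before relying on it later, a quick check like the following (an extra step, not in the original post) should print the remote hostname without asking for a password:
# Optional check: should print "worker" without prompting for a password
[root@master ~]# ssh -o BatchMode=yes root@worker hostname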
1.5 OS Preparation (run on both nodes)
# Disable the firewall
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
# Put SELinux into permissive mode (the config change takes effect after a reboot)
[root@master ~]# sed -i 's/SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Apply it to the running system as well
[root@master ~]# setenforce 0
# Disable swap
[root@master ~]# swapoff -a
[root@master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
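A quick way to confirm swap really is off (optional check, not in the original post):
# Optional check: "Swap" should show 0B and /proc/swaps should list no devices
[root@master ~]# free -h
[root@master ~]# cat /proc/swaps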
# Set the time zone
[root@master ~]# timedatectl set-timezone Asia/Shanghai
# Synchronize the time (one-shot)
[root@master ~]# ntpdate time.windows.com
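ntpdate only syncs once. Since the ntp package was installed in step 1.2, you can optionally keep the clock in sync continuously; the unit name below assumes the stock ntp package on CentOS 7 (on systems that use chrony, enable chronyd instead):
# Optional: keep time synchronized continuously
[root@master ~]# systemctl enable --now ntpd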
# Configure kernel bridge and forwarding parameters
[root@master ~]# cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl settings
[root@master ~]# sysctl --system
# Load the required kernel module
[root@master ~]# modprobe br_netfilter
[root@master ~]# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 155432 1 br_netfilter
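The modprobe above only loads br_netfilter for the current boot. If you also want it (plus overlay, which containerd typically relies on) loaded automatically after a reboot, a common addition beyond the original walkthrough is a modules-load.d entry:
# Optional: load overlay and br_netfilter automatically at boot
[root@master ~]# cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
[root@master ~]# modprobe overlay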
2. Install containerd (run on both nodes)
# Add the repository
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Switch to the Aliyun mirror
[root@master ~]# sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Install containerd
[root@master ~]# yum install -y containerd
# Back up and regenerate the containerd configuration file
[root@master ~]# cp /etc/containerd/config.toml{,.bak}
[root@master ~]# containerd config default > /etc/containerd/config.toml
# Configure the systemd cgroup driver (edit the two lines shown below)
[root@master ~]# vim /etc/containerd/config.toml
[root@master ~]# sed -n "63p;127p" /etc/containerd/config.toml
# Switch the sandbox (pause) image to the Aliyun registry
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
# Enable the systemd cgroup driver
SystemdCgroup = true
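If you prefer to script these two edits instead of changing the file in vim, something like the following sed commands should work; the default values and their line numbers depend on your containerd version, so verify the result before restarting:
# Optional non-interactive alternative to the vim edits above
[root@master ~]# sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"#' /etc/containerd/config.toml
[root@master ~]# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
[root@master ~]# grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml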
# Start containerd and enable it at boot (restart it if it was already running so the new configuration is picked up)
[root@master ~]# systemctl enable --now containerd
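Before moving on, it is worth confirming containerd came up cleanly (an extra check, not in the original post):
# Optional check: containerd should be active and respond to its CLI
[root@master ~]# systemctl is-active containerd
[root@master ~]# ctr version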
3. Install Kubernetes
3.1 Install the Kubernetes Components (run on both nodes)
# Add the Kubernetes repository
[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/repodata/repomd.xml.key
EOF
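If you want to pin a specific patch release rather than whatever the mirror currently resolves to, you can first list what the v1.31 repository provides (optional step):
# Optional: list the kubelet versions available in the v1.31 repository
[root@master ~]# yum list kubelet --showduplicates --disableexcludes=kubernetes | tail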
# Install kubeadm, kubelet, and kubectl
[root@master ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable kubelet at boot (it will restart in a loop until the node is initialized or joined; this is expected)
[root@master ~]# systemctl enable --now kubelet
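Optionally confirm on both nodes that the expected v1.31.x packages were installed:
# Optional check: confirm the installed versions
[root@master ~]# kubeadm version -o short
[root@master ~]# kubelet --version
[root@master ~]# kubectl version --client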
3.2 Initialize the Kubernetes Cluster (run on the master node)
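Optionally, you can pre-pull the control-plane images first so kubeadm init does not stall on downloads; use the same --image-repository and --kubernetes-version you pass to init (a sketch, adjust to your environment):
# Optional: pre-pull the control-plane images
[root@master ~]# kubeadm config images pull \
  --image-repository=registry.k8s.io \
  --kubernetes-version=v1.31.0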
# Initialize the cluster with kubeadm (adjust the addresses, image repository, and version to your environment)
[root@master ~]# kubeadm init \
--apiserver-advertise-address=172.16.20.10 \
--image-repository=registry.k8s.io \
--kubernetes-version=v1.31.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
# Configure kubectl access to the cluster
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
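At this point the control-plane pods should be coming up; a quick look before joining any workers (optional check):
# Optional check: the control-plane components should be Running (or briefly ContainerCreating)
[root@master ~]# kubectl get pods -n kube-system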
3.3 Join the Kubernetes Cluster (run on the worker nodes)
Run kubeadm token create --print-join-command on the master node to obtain the join command, then run that command on each worker node.
[root@worker ~]# kubeadm join 172.16.20.10:6443 --token cddh3x.lc2xhd19l1d134le \
--discovery-token-ca-cert-hash sha256:fdf02257ce01071cd1e61bde518e417fae2e86e3a7650d81eabbb75e74d1ff51
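Back on the master, the new worker should now appear in the node list; note that both nodes will report NotReady until a network plugin is installed in the next step:
# Run on the master: the worker shows up but stays NotReady until the CNI is deployed
[root@master ~]# kubectl get nodes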
4. Install a Network Plugin (run on the master node)
Choosing between Calico and Flannel mainly comes down to your requirements:
- If you run a large cluster, need fine-grained network policy control, and can accept a more involved setup, Calico is likely the better choice.
- If the cluster is small, network performance demands are modest, and you want a solution that is easy to manage and configure, Flannel may be the better fit.
# Apply the YAML manifest to deploy Flannel (pick one of the two CNIs)
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Apply the YAML manifest to deploy Calico (pick one of the two CNIs)
[root@master ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
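One practical caveat if you go with Calico: the pod CIDR passed to kubeadm init above (10.244.0.0/16) matches Flannel's default, while the stock calico.yaml assumes 192.168.0.0/16 through its commented-out CALICO_IPV4POOL_CIDR setting. A hedged sketch of one way to align them, assuming the manifest layout of recent Calico releases, is to download the manifest and set the pool explicitly before applying it:
# Optional: align Calico's pod CIDR with the one passed to kubeadm init
[root@master ~]# curl -LO https://docs.projectcalico.org/manifests/calico.yaml
[root@master ~]# vim calico.yaml   # uncomment CALICO_IPV4POOL_CIDR and set it to "10.244.0.0/16"
[root@master ~]# kubectl apply -f calico.yaml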
# Verify the cluster status
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 5m11s v1.31.1
worker Ready <none> 3m7s v1.31.1
# Show cluster information
[root@master ~]# kubectl cluster-info
Kubernetes control plane is running at https://172.16.20.10:6443
CoreDNS is running at https://172.16.20.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# Check the status of the cluster components
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy ok
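As a final smoke test (an addition beyond the original post), you can deploy a throwaway workload to confirm that scheduling, networking, and service exposure work end to end; the names below are arbitrary:
# Optional smoke test: deploy and expose a test nginx workload, then clean up
[root@master ~]# kubectl create deployment nginx-test --image=nginx
[root@master ~]# kubectl expose deployment nginx-test --port=80 --type=NodePort
[root@master ~]# kubectl get pods,svc -o wide
[root@master ~]# kubectl delete deployment,svc nginx-test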
From: https://www.cnblogs.com/ywb123/p/18436407