
3.3 Deploying a Highly Available K8S Cluster on Ubuntu 24 with kubeadm



Use kubeadm to deploy a k8s cluster with 3 master nodes and 1 worker node.

1. Environment

  • OS: Ubuntu 24.04
  • Memory: 2 GB
  • CPU: 2 cores
  • Network: all nodes can reach each other and the internet

hostname       ip             remark
k8s-master1    192.168.0.51   master1
k8s-master2    192.168.0.52   master2
k8s-master3    192.168.0.53   master3
k8s-node1      192.168.0.54   worker1

The end goal is an HA Kubernetes cluster using stacked control plane nodes, where the etcd members are co-located with the control plane nodes.

(Figure: HA architecture with stacked etcd topology)

A detailed explanation of the HA topologies is available in the official docs: https://v1-27.docs.kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology

2. Preparation

Perform the following steps on all nodes (both master and worker nodes).

Basic Linux configuration

# Time synchronization
sudo apt -y install chrony
sudo systemctl enable chrony && sudo systemctl start chrony
sudo chronyc sources -v

# Set the time zone
sudo timedatectl set-timezone Asia/Shanghai

# Set the hostname (master1/master2/master3/worker1, set accordingly on each node)
sudo hostnamectl set-hostname master1

# Add the cluster entries to /etc/hosts (append, keeping the existing localhost entries)
cat << EOF | sudo tee -a /etc/hosts
192.168.0.50 vip.cluster.local
192.168.0.51 master1
192.168.0.52 master2
192.168.0.53 master3
192.168.0.54 worker1
EOF

# Passwordless SSH - run on master1
ssh-keygen
ssh-copy-id 192.168.0.51
ssh-copy-id 192.168.0.52
ssh-copy-id 192.168.0.53
ssh-copy-id 192.168.0.54

# Disable swap
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
# or
sudo systemctl disable --now swap.img.swap
sudo systemctl mask swap.target

# Disable the firewall
sudo ufw disable
sudo ufw status

Kernel parameter tuning

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters:
# - bridge-nf-call-iptables / bridge-nf-call-ip6tables: pass bridged traffic to the iptables chains
# - ip_forward: enable IPv4 packet forwarding

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters
sudo sysctl --system

# Verify that the br_netfilter and overlay modules are loaded
sudo lsmod | grep br_netfilter
sudo lsmod | grep overlay

# Verify that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables and net.ipv4.ip_forward are set to 1 in your sysctl configuration
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Configure IPVS

# Install ipset and ipvsadm
sudo apt install -y ipset ipvsadm

# Load the IPVS modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Load the modules now
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
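
A quick optional check to confirm the modules listed above are actually loaded:

# Verify that the IPVS modules and nf_conntrack are loaded
lsmod | grep -e ip_vs -e nf_conntrack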

Install the container runtime

This guide uses containerd as the container runtime:

# Install containerd
sudo apt install -y containerd

Modify the containerd configuration file

Configure containerd to use the systemd cgroup driver and change the sandbox (pause) image source:

# Generate the default containerd configuration file
sudo mkdir -p /etc/containerd/
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
# Edit /etc/containerd/config.toml and set SystemdCgroup to true
sudo vi /etc/containerd/config.toml

# or use the following substitution instead
sudo sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
sudo cat /etc/containerd/config.toml | grep SystemdCgroup

# Change the sandbox (pause) image source
sudo sed -i "s#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
sudo cat /etc/containerd/config.toml | grep sandbox_image
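
After changing config.toml, restart containerd so the new cgroup driver and sandbox image take effect:

# Restart containerd to apply the new configuration
sudo systemctl restart containerd
sudo systemctl enable containerd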

About the cgroup driver:

Two cgroup drivers are available: cgroupfs and systemd. The Ubuntu release used here runs systemd as its init system, so both the kubelet and the container runtime are configured to use the systemd cgroup driver.

For background, see:

https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#cgroupfs-cgroup-driver

Configuration reference:

https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver

Make sure the container runtime and the kubelet use the same cgroup driver, otherwise the kubelet will fail.
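
Since kubeadm v1.22 the kubelet already defaults to the systemd cgroup driver, so nothing extra is needed here. For illustration only, the driver could also be set explicitly by appending a KubeletConfiguration document to the kubeadm configuration file generated later (init.default.yaml is the file name used further below):

# Optional: set the kubelet cgroup driver explicitly via a KubeletConfiguration document
cat <<EOF >> init.default.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF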

Install kubeadm, kubelet, and kubectl

# Install dependencies
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the Kubernetes signing key (make sure the keyring directory exists)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes apt repository, using the Aliyun mirror
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Refresh the apt index
sudo apt update

# List the available versions
apt-cache madison kubeadm

# Without an explicit version the latest available version is installed; this guide installs 1.28.2
sudo apt-get install -y kubelet kubeadm kubectl

# Hold the versions so apt upgrade does not update them
sudo apt-mark hold kubelet kubeadm kubectl

# kubectl shell completion
sudo apt install -y bash-completion
kubectl completion bash | sudo tee /etc/profile.d/kubectl_completion.sh > /dev/null
. /etc/profile.d/kubectl_completion.sh

Note:

The kubelet will now restart every few seconds, as it waits in a crash loop for kubeadm to tell it what to do.
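
This crash loop is expected at this stage and can be observed with:

# The kubelet keeps restarting until kubeadm init/join provides its configuration
sudo systemctl status kubelet
sudo journalctl -u kubelet -f    # Ctrl-C to exit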

3. High availability: deploy keepalived and haproxy

Install keepalived and haproxy

For high availability, install and configure keepalived and haproxy on all master nodes.

$ sudo apt install keepalived haproxy

# Make sure both services are enabled to start at boot
test@master1:/etc/keepalived$ systemctl is-enabled keepalived
enabled
test@master1:/etc/keepalived$ systemctl is-enabled haproxy.service
enabled

Configure haproxy

Configure this on all master nodes with identical content. The main additions are the frontend apiserver and backend apiserverbackend sections:

test@master1:~$ cat /etc/haproxy/haproxy.cfg
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:16443
    mode tcp
    option tcplog
    default_backend apiserverbackend

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserverbackend
    option httpchk

    http-check connect ssl
    http-check send meth GET uri /healthz
    http-check expect status 200

    mode tcp
    balance     roundrobin

    server master1 192.168.0.51:6443 check verify none
    server master2 192.168.0.52:6443 check verify none
    server master3 192.168.0.53:6443 check verify none

Restart the haproxy service; it will then listen on port 16443.

sudo systemctl restart haproxy
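
A quick sanity check that the configuration is valid and that haproxy is listening on 16443:

# Validate the configuration file
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Confirm the listener on port 16443
sudo ss -lntp | grep 16443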

Configure keepalived

Create the keepalived configuration file:

test@master1:/etc/keepalived$ sudo cp keepalived.conf.sample keepalived.conf
test@master1:/etc/keepalived$ cat keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state MASTER  # MASTER on master1, BACKUP on the other two nodes
    interface ens33  # network interface name
    virtual_router_id 51
    priority 101    # priority: 101 on master1, 100 on the other two
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.50 # virtual IP, the same on all three nodes
    }
    track_script {
        check_apiserver
    }
}

All three master nodes need this configuration; on the other two, adjust the values called out in the comments above (state and priority) accordingly.

The health check script is shown below; it is also required on all three master nodes, with identical content:

root@master1:/etc/keepalived# cat check_apiserver.sh

#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl -sfk --max-time 2 https://localhost:16443/healthz -o /dev/null || errorExit "Error GET https://localhost:16443/healthz"

root@master1:/etc/keepalived# chmod +x check_apiserver.sh

Restart the keepalived service on all three nodes; the virtual IP will come up on master1.
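
A minimal sketch of the restart and the VIP check (assuming the interface is ens33, as in the configuration above):

# Restart keepalived on every master node
sudo systemctl restart keepalived
# On master1 the VIP should now be attached to the interface
ip addr show ens33 | grep 192.168.0.50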

For this part, refer to the official documentation; the official load-balancing options also allow running haproxy and keepalived as pods:

https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#options-for-software-load-balancing

4. Install the K8S cluster

Prepare the images

Run the following on each of the 3 master nodes to pre-pull the required container images:

# List the required image versions
test@ubuntusvr:~$ kubeadm config images list
...

# List the images using the Aliyun mirror repository
test@ubuntusvr:~$ kubeadm config images list --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
...

# Pull the images from the Aliyun mirror repository
root@master1:~# kubeadm config images pull --kubernetes-version=v1.28.2 --image-repository registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

Note:

Two Aliyun mirror repositories are available:

  • registry.aliyuncs.com/google_containers
  • registry.cn-hangzhou.aliyuncs.com/google_containers

Initialize the Kubernetes cluster

Initialization can be done either from the command line or with a configuration file.

Configuration file

Generate a configuration file template:

kubeadm config print init-defaults > init.default.yaml

The init.default.yaml file is shown below; adjust it to match your environment:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.51
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
controlPlaneEndpoint: vip.cluster.local:16443   # HA endpoint (VIP and haproxy port)
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.200.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Initialize the first control plane node

Initialize the first control plane node using the configuration file:

sudo kubeadm init --config init.default.yaml --upload-certs

# Output after initialization completes
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
# Join command for control plane nodes
  kubeadm join vip.cluster.local:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:675be496f04667e8126fdcd087d01f96bc168643b62c296dc4950c84524192ac \
        --control-plane --certificate-key 72d46e58942154b2803ba7cceb26952cf425142b28baf257b266bc68c8dc01f3

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
# Join command for worker nodes
kubeadm join vip.cluster.local:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:675be496f04667e8126fdcd087d01f96bc168643b62c296dc4950c84524192ac

After a successful deployment, configure the kubeconfig file:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If initialization fails, troubleshoot the issue, run sudo kubeadm reset, and retry.
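
A sketch of a fuller cleanup before retrying (what needs to be removed can vary with the failure):

# Reset the kubeadm state and remove leftover configuration
sudo kubeadm reset -f
sudo rm -rf /etc/cni/net.d $HOME/.kube/config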

Command-line alternative:

sudo kubeadm init \
--kubernetes-version=v1.28.2  \
--image-repository registry.aliyuncs.com/google_containers --v=5 \
--control-plane-endpoint vip.cluster.local:16443 \
--upload-certs \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

Useful places to check when troubleshooting errors (see the command sketch after this list):

  1. journalctl -xeu kubelet | grep Failed
  2. /var/lib/kubelet/config.yaml
  3. /etc/kubernetes/kubelet.conf
  4. Check that the VIP hostname resolves and that the VIP responds to ping
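
The checks above expressed as commands (the VIP name and address are the ones used throughout this guide):

# Kubelet errors
sudo journalctl -xeu kubelet | grep Failed
# Kubelet configuration files
sudo cat /var/lib/kubelet/config.yaml
sudo cat /etc/kubernetes/kubelet.conf
# Resolution and reachability of the VIP
getent hosts vip.cluster.local
ping -c 3 192.168.0.50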

Install the network plugin

Deploy the pod network plugin; flannel is used here:

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-flannel
```
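
Note that the flannel manifest defaults to a 10.244.0.0/16 pod network. If the cluster was initialized with a different podSubnet (for example 10.200.0.0/16, as in the configuration file above), a sketch of adjusting the manifest before applying it:

# Download the manifest, change its Network CIDR to match podSubnet, then apply it
curl -fsSLO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i 's#10.244.0.0/16#10.200.0.0/16#' kube-flannel.yml
kubectl apply -f kube-flannel.yml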

Join the other two master nodes

Run the control plane join command on the other two master nodes:

  kubeadm join vip.cluster.local:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:675be496f04667e8126fdcd087d01f96bc168643b62c296dc4950c84524192ac \
        --control-plane --certificate-key 72d46e58942154b2803ba7cceb26952cf425142b28baf257b266bc68c8dc01f3

  • The --control-plane flag tells kubeadm join to create a new control plane instance.
  • The --certificate-key ... causes the control plane certificates to be downloaded from the kubeadm-certs Secret in the cluster and decrypted with the given key.

Note:

After the control plane nodes have joined, remember to set up the kubeconfig file on them as well.

The other two nodes also need to pull the images; alternatively, export them from master1 and import them on the other nodes:

# Export the images on master1 (k8s.io namespace); adjust the image names to match the repository actually in use (see: ctr -n k8s.io images ls)
ctr -n k8s.io images export  k8s-v1.28.2.tar registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/coredns/coredns:v1.10.1 registry.k8s.io/etcd:3.5.9-0

# Import the images on master2 and master3
ctr -n k8s.io image import k8s-v1.28.2.tar
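
A minimal sketch of copying the archive from master1 to the other masters (using the passwordless SSH set up earlier; the destination path is illustrative):

# Copy the image archive to master2 and master3
scp k8s-v1.28.2.tar 192.168.0.52:~/
scp k8s-v1.28.2.tar 192.168.0.53:~/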

With all three control plane nodes added (flannel is deployed automatically on newly joined nodes), make sure every node is Ready:

root@master1:~# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   37m   v1.28.2
master2   Ready    control-plane   18m   v1.28.2
master3   Ready    control-plane   17m   v1.28.2

Join the worker node

kubeadm join vip.cluster.local:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:675be496f04667e8126fdcd087d01f96bc168643b62c296dc4950c84524192ac

Final node status:

root@master1:~# kubectl get node
NAME      STATUS   ROLES           AGE     VERSION
master1   Ready    control-plane   44m     v1.28.2
master2   Ready    control-plane   25m     v1.28.2
master3   Ready    control-plane   24m     v1.28.2
worker1   Ready    <none>          4m45s   v1.28.2

With the HA cluster in place, reboot tests and single-node failure tests were performed, and the cluster remained accessible throughout.
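
A minimal sketch of such a single-node failure test (assuming the VIP currently sits on master1):

# On master1: stop keepalived and haproxy to simulate a control plane failure
sudo systemctl stop keepalived haproxy
# On master2/master3: the VIP should move over and the API should stay reachable
ip addr show ens33 | grep 192.168.0.50
kubectl get nodes
# Restore master1 afterwards
sudo systemctl start keepalived haproxy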

From: https://blog.csdn.net/codelearning/article/details/139869446
