
Building a Highly Available Kubernetes Cluster with kube-vip


The environment is as follows:
Ubuntu Server 20.04 LTS (switched to Debian/Ubuntu after CentOS became CentOS Stream)
containerd 1.5.5 (Docker is no longer supported as of Kubernetes 1.24, so containerd is used instead)
Kubernetes v1.23.5
kube-vip v0.4.3 (L2 ARP mode is used here for a simple deployment)

+-------------+---------------+--------+
|  Hostname   |   IP Address  |  Role  |
+-------------+---------------+--------+
| k8s         | 192.168.1.210 | VIP    |
+-------------+---------------+--------+
| k8s-master1 | 192.168.1.211 | Master |
+-------------+---------------+--------+
| k8s-master2 | 192.168.1.212 | Master |
+-------------+---------------+--------+
| k8s-master3 | 192.168.1.213 | Master |
+-------------+---------------+--------+
| k8s-worker1 | 192.168.1.214 | Worker |
+-------------+---------------+--------+
| k8s-worker2 | 192.168.1.215 | Worker |
+-------------+---------------+--------+
| k8s-worker3 | 192.168.1.216 | Worker |
+-------------+---------------+--------+

kube-vip provides the VIP for the master nodes, so no external load balancer or keepalived + haproxy setup is needed: the kube-vip instances elect a leader, and the leader answers ARP requests for the VIP.

1. Environment Preparation

Configure the required kernel modules and sysctls

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
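
A quick check that the module is loaded and the sysctls took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward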

Disable swap

sudo swapoff -a
sudo sed -i '/swap/s/^/#/' /etc/fstab
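
If swap is fully off, swapon prints nothing:

swapon --show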

Install the required packages and containerd

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl chrony jq containerd

Configure containerd to use the systemd cgroup driver

sudo mkdir -p /etc/containerd
containerd config default | \
sed -e 's,SystemdCgroup = .*,SystemdCgroup = true,' | \
  sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
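
A quick sanity check that the cgroup setting stuck and the service restarted cleanly:

grep SystemdCgroup /etc/containerd/config.toml
sudo systemctl is-active containerd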

2. Install the Kubernetes Tools

Install kubelet, kubeadm, and kubectl, and configure containerd as the default CRI for Kubernetes

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo crictl config runtime-endpoint /run/containerd/containerd.sock

Confirm the installed version; this article uses v1.23.5

kubectl version --client --short

Add shell command completion (writing to /etc/bash_completion.d requires root):

crictl completion | sudo tee /etc/bash_completion.d/crictl > /dev/null
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
kubeadm completion bash | sudo tee /etc/bash_completion.d/kubeadm > /dev/null

Pull the container images from a mirror inside China (on all nodes)

cat << EOF > ~/pull_k8s_images.sh
#!/bin/bash
#Made by www.ebanban.com
registry=registry.cn-hangzhou.aliyuncs.com/google_containers
images=\`kubeadm config images list |awk -F '/' '{print \$NF}'\`

for image in \$images
do
  if [[ \$image =~ "coredns" ]]; then
    ctr -n k8s.io image pull \${registry}/\$image
    if [ \$? -eq 0 ]; then
      ctr -n k8s.io image tag \${registry}/\$image k8s.gcr.io/coredns/\$image
      ctr -n k8s.io image rm \${registry}/\$image
    else
      echo "ERROR: download failed,\$image"
    fi
  else
    ctr -n k8s.io image pull \${registry}/\$image
    if [ \$? -eq 0 ]; then
      ctr -n k8s.io image tag \${registry}/\$image k8s.gcr.io/\$image
      ctr -n k8s.io image rm \${registry}/\$image
    else
      echo "ERROR: download failed,\$image"
    fi
  fi
done
EOF
sudo bash ~/pull_k8s_images.sh
sudo rm ~/pull_k8s_images.sh
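
If the pulls succeeded, the retagged images are visible in containerd's k8s.io namespace:

sudo ctr -n k8s.io images ls | grep k8s.gcr.io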

3. Create the kube-vip Static Pod

The kube-vip pod must come up before the cluster itself, so it is created as a static pod. Run the following as root on all three master nodes (adjust the VIP and network interface name to your environment):

export VIP=192.168.1.210

export INTERFACE=ens32

KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")

ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION

alias kube-vip="ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"

kube-vip manifest pod \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
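
Confirm the manifest was written; once kubeadm init finishes on the first master, the VIP should also appear on the interface:

ls -l /etc/kubernetes/manifests/kube-vip.yaml
ip addr show $INTERFACE | grep $VIP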

4. Deploy the First Master Node

Initialize the first node with kubeadm init

export VIP=192.168.1.210
sudo kubeadm init --control-plane-endpoint "$VIP:6443" --upload-certs

When it completes, output like the following appears. Record the kubeadm join command for adding the remaining nodes later.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.211:6443 --token vwgly8.gkbh84snlmffq1k6 \
	--discovery-token-ca-cert-hash sha256:19cfbebb57eb86b75df3759929d255d2ee661d367f7b53328b9d6f209951ce9e 

Run the following to start working with the cluster, then watch the pods come up with kubectl get pod

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get pod -n kube-system -w

5. Deploy the Second and Third Master Nodes

Join the second and third masters using the command produced when the first master was initialized. A fresh join command can be generated with:

echo $(kubeadm token create --print-join-command) --control-plane --certificate-key $(kubeadm init phase upload-certs --upload-certs | sed -n '3p')

Then run the generated command on the second and third master nodes

(The following is an example; adjust to your environment.)
sudo kubeadm join 192.168.1.210:6443 --token 0r35b6.pe7st8nz7hfzs5j7 \
	--discovery-token-ca-cert-hash sha256:9353b8a8b5ea759e3ffdddb805b076e087588c5ea8a2b40ddfd3a19d0e5e600f \
	--control-plane --certificate-key 77f3d44ed8b2e0b5521cb878e49bec627e2619b7ba4a2ea63b47f4c911f7e058

After the second and third masters join, kubectl get nodes shows every node as NotReady, because no CNI has been deployed yet

NAME          STATUS     ROLES                  AGE    VERSION
k8s-master1   NotReady   control-plane,master   5m     v1.23.5
k8s-master2   NotReady   control-plane,master   7m     v1.23.5
k8s-master3   NotReady   control-plane,master   7m     v1.23.5

6. Deploy the CNI

This article uses Cilium as the CNI component.
First, install the Cilium CLI

curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}

sha256sum --check cilium-linux-amd64.tar.gz.sha256sum

sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin

rm cilium-linux-amd64.tar.gz{,.sha256sum}

Install Cilium, wait for it to report ready, and verify connectivity (the connectivity test may fail if networks outside China are unreachable)

cilium install
cilium status --wait
cilium connectivity test
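
To watch the Cilium agents start (the agent pods carry the k8s-app=cilium label):

kubectl -n kube-system get pods -l k8s-app=cilium -w
kubectl get nodes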

7. Join the Worker Nodes

Joining worker nodes works much like joining masters, with a simpler command.
The join command can be generated with kubeadm token create --print-join-command.

(The following is an example; adjust to your environment.)
export VIP=192.168.1.210

sudo kubeadm join $VIP:6443 --token 2edvrs.b2a86ndl64kctmlu --discovery-token-ca-cert-hash sha256:9353b8a8b5ea759e3ffdddb805b076e087588c5ea8a2b40ddfd3a19d0e5e600f

This article adds three worker nodes; once they have joined, the cluster looks like this:

# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   2h    v1.23.5
k8s-master2   Ready    control-plane,master   2h    v1.23.5
k8s-master3   Ready    control-plane,master   2h    v1.23.5
k8s-worker1   Ready    <none>                 2h    v1.23.5
k8s-worker2   Ready    <none>                 2h    v1.23.5
k8s-worker3   Ready    <none>                 2h    v1.23.5
#
# kubectl get pods -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS  AGE
kube-system   cilium-4n7lv                          1/1     Running   0         2h
kube-system   cilium-bvjmn                          1/1     Running   0         2h
kube-system   cilium-dxwv9                          1/1     Running   0         2h
kube-system   cilium-operator-75d6565577-vrm67      1/1     Running   0         2h
kube-system   cilium-wkv86                          1/1     Running   0         2h
kube-system   cilium-wvdhc                          1/1     Running   0         2h
kube-system   cilium-xsq7c                          1/1     Running   0         2h
kube-system   coredns-64897985d-6n8mg               1/1     Running   0         2h
kube-system   coredns-64897985d-l5wj9               1/1     Running   0         2h
kube-system   etcd-k8s-master1                      1/1     Running   0         2h
kube-system   etcd-k8s-master2                      1/1     Running   0         2h
kube-system   etcd-k8s-master3                      1/1     Running   0         2h
kube-system   kube-apiserver-k8s-master1            1/1     Running   0         2h
kube-system   kube-apiserver-k8s-master2            1/1     Running   0         2h
kube-system   kube-apiserver-k8s-master3            1/1     Running   0         2h
kube-system   kube-controller-manager-k8s-master1   1/1     Running   0         2h
kube-system   kube-controller-manager-k8s-master2   1/1     Running   0         2h
kube-system   kube-controller-manager-k8s-master3   1/1     Running   0         2h
kube-system   kube-proxy-2cqfw                      1/1     Running   0         2h
kube-system   kube-proxy-fww5l                      1/1     Running   0         2h
kube-system   kube-proxy-hspp7                      1/1     Running   0         2h
kube-system   kube-proxy-lzvtv                      1/1     Running   0         2h
kube-system   kube-proxy-w9d56                      1/1     Running   0         2h
kube-system   kube-proxy-wm9bn                      1/1     Running   0         2h
kube-system   kube-scheduler-k8s-master1            1/1     Running   0         2h
kube-system   kube-scheduler-k8s-master2            1/1     Running   0         2h
kube-system   kube-scheduler-k8s-master3            1/1     Running   0         2h
kube-system   kube-vip-k8s-master1                  1/1     Running   0         2h
kube-system   kube-vip-k8s-master2                  1/1     Running   0         2h
kube-system   kube-vip-k8s-master3                  1/1     Running   0         2h

#
# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  2h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   2h

8. Deploy the MetalLB Load Balancer

An on-premises cluster needs its own load balancer so that a Service exposed with type LoadBalancer does not leave its external IP stuck in Pending. kube-vip can also load-balance Services, but MetalLB is deployed here in L2 mode; BGP mode is left for the reader to explore.
Deploy MetalLB with the following commands

METALLBVER=$(curl -sL https://api.github.com/repos/metallb/metallb/releases | jq -r ".[0].name")

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/$METALLBVER/manifests/namespace.yaml

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/$METALLBVER/manifests/metallb.yaml

Then define the external IP address range for MetalLB with a ConfigMap; this article uses 192.168.1.221-230

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.221-192.168.1.230
EOF
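
Note that MetalLB v0.13 and later drop the ConfigMap in favor of CRDs; on those releases the equivalent L2 configuration looks roughly like this (a sketch, not used in this article):

cat << EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: default
spec:
  addresses:
  - 192.168.1.221-192.168.1.230
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  namespace: metallb-system
  name: default
spec:
  ipAddressPools:
  - default
EOF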

9. Deploy the Ingress Controller

Deploy the Ingress Controller with the following commands

INGRESSVER=$(curl -sL https://api.github.com/repos/kubernetes/ingress-nginx/releases | jq -r '.[] | select(.target_commitish=="main" and .prerelease==false and .draft==false) | .tag_name' | sed -n '1p')

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/$INGRESSVER/deploy/static/provider/baremetal/deploy.yaml

Change the ingress-nginx-controller Service type to LoadBalancer

kubectl get svc -n ingress-nginx ingress-nginx-controller -o yaml | \
  sed -e 's/type: NodePort/type: LoadBalancer/' | \
  kubectl apply -f -

To pin the ingress Service to a specific IP, add loadBalancerIP: X.X.X.X under spec in the ingress-nginx-controller Service; here the IP automatically assigned by MetalLB is kept.
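
For example, with kubectl patch (a sketch, assuming 192.168.1.221 lies inside the MetalLB pool defined above):

kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"loadBalancerIP":"192.168.1.221"}}'
kubectl get svc -n ingress-nginx ingress-nginx-controller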

10. Deploy the Dashboard

Deploy the dashboard with the following commands

DASHBOARDVER=$(curl -sL https://api.github.com/repos/kubernetes/dashboard/releases | jq -r ".[0].name")

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/$DASHBOARDVER/aio/deploy/recommended.yaml

Create a ServiceAccount and bind it to the cluster-admin role

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Generate a self-signed certificate, create the TLS secret, and create an Ingress for the dashboard

export INGRESSEIP=192.168.1.221

openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 -keyout kubernetes-dashboard-certs.key -out kubernetes-dashboard-certs.crt -subj "/CN=$INGRESSEIP.nip.io/O=$INGRESSEIP.nip.io"

kubectl create secret tls kubernetes-dashboard-certs --key kubernetes-dashboard-certs.key --cert kubernetes-dashboard-certs.crt -n kubernetes-dashboard

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /\$2
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: $INGRESSEIP.nip.io
    http:
      paths:
      - path: /dashboard(/|\$)(.*)
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443

  tls:
  - hosts:
    - $INGRESSEIP.nip.io
    secretName: kubernetes-dashboard-certs
  ingressClassName: nginx
EOF
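
A quick check that the Ingress was admitted and picked up an address:

kubectl -n kubernetes-dashboard get ingress ingress-kubernetes-dashboard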

Retrieve the login token

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user-token | awk '{print $1}')
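
On Kubernetes v1.24 and later, token Secrets are no longer created automatically for ServiceAccounts; there the token must be requested explicitly instead (not needed on the v1.23 cluster used here):

kubectl -n kubernetes-dashboard create token admin-user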

Then open https://192.168.1.221.nip.io/dashboard and log in with the token obtained above.

    以下内容均来自个人笔记并重新梳理,如有错误欢迎指正!如果对您有帮助,烦请点赞、关注、转发!欢迎扫码关注个人公众号!目录一、基本介绍二、工作原理三、资源清单(示例)1、IngressController2、Ingress对象四、常用命令一、基本介绍Ingress是Kubernetes提供的一种服务......