
Deploying Kubernetes 1.25.4 initialized in IPVS mode



1. Environment preparation

Hostname (and aliases)                                              IP Address        OS Version
k8s-master01  k8s-master01.wang.org  kubeapi.wang.org  kubeapi      192.168.100.201   Ubuntu 20.04
k8s-master02  k8s-master02.wang.org                                 192.168.100.202   Ubuntu 20.04
k8s-master03  k8s-master03.wang.org                                 192.168.100.203   Ubuntu 20.04
k8s-node01    k8s-node01.wang.org                                   192.168.100.204   Ubuntu 20.04
k8s-node02    k8s-node02.wang.org                                   192.168.100.205   Ubuntu 20.04
k8s-node03    k8s-node03.wang.org                                   192.168.100.206   Ubuntu 20.04

1-1. Set hostnames
# Run on all nodes (use each node's own hostname):
[root@ubuntu2004 ~]#hostnamectl set-hostname k8s-master01
1-2. Disable the firewall
# Run on all nodes:
[root@k8s-master01 ~]# ufw disable
[root@k8s-master01 ~]# ufw status
1-3. Time synchronization
# Run on all nodes:
[root@k8s-master01 ~]# apt install -y chrony
[root@k8s-master01 ~]# systemctl restart chrony
[root@k8s-master01 ~]# systemctl status chrony
[root@k8s-master01 ~]# chronyc sources
1-4. Mutual hostname resolution
# Run on all nodes:
[root@k8s-master01 ~]#vim /etc/hosts
192.168.100.201 k8s-master01 k8s-master01.wang.org kubeapi.wang.org kubeapi
192.168.100.202 k8s-master02 k8s-master02.wang.org
192.168.100.203 k8s-master03 k8s-master03.wang.org
192.168.100.204 k8s-node01 k8s-node01.wang.org
192.168.100.205 k8s-node02 k8s-node02.wang.org
192.168.100.206 k8s-node03 k8s-node03.wang.org
1-5. Disable swap
# Run on all nodes:
[root@k8s-master01 ~]# sed -r -i '/\/swap/s@^@#@' /etc/fstab
[root@k8s-master01 ~]# swapoff -a
[root@k8s-master01 ~]# systemctl --type swap
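A quick optional check that no swap device remains active on the node:

[root@k8s-master01 ~]# swapon --show
[root@k8s-master01 ~]# free -h | grep -i swap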

#If the swap device is not disabled, you must later edit the kubelet configuration file /etc/default/kubelet so that kubelet ignores the swap-enabled status error, with the content: KUBELET_EXTRA_ARGS="--fail-swap-on=false"
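For reference, writing that override could look like this (a minimal sketch of the alternative just described; this walkthrough disables swap, so the step is not needed here):

[root@k8s-master01 ~]# cat > /etc/default/kubelet <<EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF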
2. Install Docker
# Run on all nodes:

#Install the required system tools
[root@k8s-master01 ~]# apt update
[root@k8s-master01 ~]# apt -y install apt-transport-https ca-certificates curl software-properties-common

#Install the repository GPG key
[root@k8s-master01 ~]# curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
OK
#Add the Docker package source
[root@k8s-master01 ~]# add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

#Update the package index and install Docker CE
[root@k8s-master01 ~]# apt update
[root@k8s-master01 ~]# apt install -y docker-ce
#Run on all nodes:
kubelet requires the Docker engine to use systemd as the cgroup driver (Docker's default is cgroupfs). Therefore, edit Docker's configuration file /etc/docker/daemon.json and add the content below; the registry-mirrors entries specify the image mirror (accelerator) services to use.
[root@k8s-master01 ~]#mkdir /etc/docker
[root@k8s-master01 ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": [
        "https://docker.mirrors.ustc.edu.cn",
        "https://hub-mirror.c.163.com",
        "https://reg-mirror.qiniu.com",
        "https://registry.docker-cn.com"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "200m"
    },
    "storage-driver": "overlay2"
}

[root@k8s-master01 ~]#systemctl daemon-reload && systemctl enable --now docker && docker version
Client: Docker Engine - Community
Version: 20.10.21
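To confirm that the systemd cgroup driver configured in daemon.json is actually in effect, the active driver can be queried (a quick optional check):

[root@k8s-master01 ~]# docker info --format '{{.CgroupDriver}}'
# expected output: systemd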
#Note: when kubeadm deploys a Kubernetes cluster, it pulls images from Google's registry k8s.gcr.io by default. Since 2022 the registry has moved to registry.k8s.io, which is directly reachable from mainland China, so an image mirror or proxy is no longer required to pull the images. If you prefer a domestic mirror, see https://blog.51cto.com/dayu/5811307
3. Install cri-dockerd
# Run on all nodes:
# Download from: https://github.com/Mirantis/cri-dockerd
[root@k8s-master01 ~]# apt install ./cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb -y

#After installation completes, the corresponding cri-dockerd.service unit starts automatically
[root@k8s-master01 ~]#systemctl restart cri-docker.service && systemctl status cri-docker.service
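For reference, the .deb package installed above can be fetched from the project's GitHub releases page ahead of time, for example (a sketch — confirm the exact asset URL for your release on the page linked above):

[root@k8s-master01 ~]# curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb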
4. Install kubeadm, kubelet, and kubectl
# Run on all nodes:
# Configure the package repository for kubelet, kubeadm, and the related packages on each host; see the Aliyun mirror site for reference
[root@k8s-master01 ~]# apt update
[root@k8s-master01 ~]# apt install -y apt-transport-https curl
[root@k8s-master01 ~]# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
[root@k8s-master01 ~]#cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF


#Update the package index and install
[root@k8s-master01 ~]# apt update
[root@k8s-master01 ~]# apt install -y kubelet kubeadm kubectl

#Note: do not start kubelet yet; only enable it to start on boot
[root@k8s-master01 ~]# systemctl enable kubelet

#Confirm the installed version of kubeadm and the other binaries
[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:35:06Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
5. Integrate kubelet with cri-dockerd
5-1. Configure cri-dockerd
# Run on all nodes:

[root@k8s-master01 ~]# vim /usr/lib/systemd/system/cri-docker.service

#ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8 --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d



#Notes:
Parameters added above (their values must match the actual paths where the CNI plugin is deployed on the system):
--network-plugin: the type of network plugin specification; CNI is used here;
--cni-bin-dir: the search directory for CNI plugin binaries;
--cni-cache-dir: the cache directory used by the CNI plugin;
--cni-conf-dir: the directory from which CNI configuration files are loaded.
After editing, reload systemd and restart the cri-docker.service unit.

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl restart cri-docker.service
[root@k8s-master01 ~]# systemctl status cri-docker
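Optionally, verify that the CRI endpoint exposed by cri-dockerd responds; crictl is normally pulled in as a dependency of kubeadm in the previous step (a quick check, not part of the original procedure):

[root@k8s-master01 ~]# crictl --runtime-endpoint unix:///run/cri-dockerd.sock version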
5-2. Configure kubelet
# Run on all nodes:

# Point kubelet at the local Unix socket opened by cri-dockerd; by default the path is /run/cri-dockerd.sock
[root@k8s-master01 ~]# mkdir /etc/sysconfig
[root@k8s-master01 ~]# vim /etc/sysconfig/kubelet
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/cri-dockerd.sock"
[root@k8s-master01 ~]# cat /etc/sysconfig/kubelet
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/cri-dockerd.sock"

#Note: this step can also be skipped; instead, pass the "--cri-socket unix:///run/cri-dockerd.sock" option directly to each kubeadm command shown later
6. Initialize the first control-plane node
# Run on the first control-plane node:

#List the images Kubernetes needs
[root@k8s-master01 ~]#kubeadm config images list
registry.k8s.io/kube-apiserver:v1.25.4
registry.k8s.io/kube-controller-manager:v1.25.4
registry.k8s.io/kube-scheduler:v1.25.4
registry.k8s.io/kube-proxy:v1.25.4
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.5-0
registry.k8s.io/coredns/coredns:v1.9.3


#Pull the required images from the Aliyun registry mirror
[root@k8s-master01 ~]#kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --cri-socket unix:///run/cri-dockerd.sock
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.25.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.25.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.25.4
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.8
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.5-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3

=======================
#kubeadm can load its configuration from a file, which allows richer customization of the deployment. Print the built-in default init configuration with:
kubeadm config print init-defaults

=======================
[root@k8s-master01 ~]#vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
kind: InitConfiguration
localAPIEndpoint:
  # IP address of the first control-plane node being initialized
  advertiseAddress: 192.168.100.201
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  # Hostname of the first control-plane node
  name: k8s-master01.wang.org
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
# Access endpoint of the control plane; here it is mapped to the kubeapi.wang.org domain name
controlPlaneEndpoint: "kubeapi.wang.org:6443"
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.25.4
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Proxy mode kube-proxy uses for Services; the default is iptables
mode: "ipvs"
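
Because the KubeProxyConfiguration above selects ipvs mode, it can help to make sure the IPVS kernel modules and the ipset/ipvsadm tools are present on every node before initializing (a sketch; on recent kernels kube-proxy can usually load these modules by itself):

[root@k8s-master01 ~]# apt install -y ipset ipvsadm
[root@k8s-master01 ~]# cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
[root@k8s-master01 ~]# modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
[root@k8s-master01 ~]# lsmod | grep ip_vs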

[root@k8s-master01 ~]#kubeadm init --config kubeadm-config.yaml --upload-certs


#Output like the following means initialization succeeded; record this information for later use:
.....

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join kubeapi.wang.org:6443 --token 3eam3e.wt9g2toztrpse7v2 \
--discovery-token-ca-cert-hash sha256:ed3b9c0d449e970d889db6c4bf19a377f483aeae4c4ba34cd7d59ebc1cbe9e81 \
--control-plane --certificate-key c6c74f40c7fef168b5026f5168fe8ce20a01c9adee46733d289fb2e40b1c360a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubeapi.wang.org:6443 --token 3eam3e.wt9g2toztrpse7v2 \
--discovery-token-ca-cert-hash sha256:ed3b9c0d449e970d889db6c4bf19a377f483aeae4c4ba34cd7d59ebc1cbe9e81

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
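At this point the control plane is up, but the nodes will report NotReady until the network add-on from step 8 is deployed; a quick look at the system pods is still a useful sanity check:

[root@k8s-master01 ~]# kubectl get pods -n kube-system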
#If initialization fails with an error like the following:
"Error getting node" err="node \"k8s-master01\" not found"

#1. Pin the pause image version in the cri-docker.service file:
[root@k8s-master01 ~]# vim /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8 --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d
#2. Restart the service:
systemctl daemon-reload
systemctl restart cri-docker.service

#3. Reset the cluster and clean up, then re-run kubeadm init:
kubeadm reset --cri-socket unix:///run/cri-dockerd.sock && rm -rf /etc/kubernetes/ /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni /etc/cni/net.d
7. Join the remaining nodes to the cluster
# Run on k8s-master02 and k8s-master03 to join them as additional control-plane nodes:
[root@k8s-master02 ~]# kubeadm join kubeapi.wang.org:6443 --token 3eam3e.wt9g2toztrpse7v2 --discovery-token-ca-cert-hash sha256:ed3b9c0d449e970d889db6c4bf19a377f483aeae4c4ba34cd7d59ebc1cbe9e81 --control-plane --certificate-key c6c74f40c7fef168b5026f5168fe8ce20a01c9adee46733d289fb2e40b1c360a --cri-socket unix:///run/cri-dockerd.sock



# Run on k8s-node01, k8s-node02, and k8s-node03 to join them as worker nodes:
[root@k8s-node01 ~]#kubeadm join kubeapi.wang.org:6443 --token 3eam3e.wt9g2toztrpse7v2 --discovery-token-ca-cert-hash sha256:ed3b9c0d449e970d889db6c4bf19a377f483aeae4c4ba34cd7d59ebc1cbe9e81 --cri-socket unix:///run/cri-dockerd.sock
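If the token printed by kubeadm init has expired by the time a node joins (tokens are valid for 24 hours by default), a fresh worker join command can be generated on the first control-plane node:

[root@k8s-master01 ~]# kubeadm token create --print-join-command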
8. Deploy Calico
# Run on the first control-plane node:
[root@k8s-master01 ~]#apt install zip unzip -y
[root@k8s-master01 ~]#unzip calico-3.24.4.zip
[root@k8s-master01 ~]#cd calico-3.24.4/manifests/

[root@k8s-master01 manifests]#kubectl apply -f calico.yaml
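If the calico-3.24.4.zip archive is not at hand, the same calico.yaml manifest can usually be fetched directly from the Calico project instead (a sketch — confirm the URL matches the Calico version you intend to deploy):

[root@k8s-master01 ~]# curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml
[root@k8s-master01 ~]# kubectl apply -f calico.yaml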

[root@k8s-master01 manifests]#kubectl get node
NAME                    STATUS   ROLES           AGE   VERSION
k8s-master01.wang.org   Ready    control-plane   23m   v1.25.4
k8s-master02            Ready    control-plane   17m   v1.25.4
k8s-master03            Ready    control-plane   18m   v1.25.4
k8s-node01              Ready    <none>          15m   v1.25.4
k8s-node02              Ready    <none>          15m   v1.25.4
k8s-node03              Ready    <none>          15m   v1.25.4
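To confirm that kube-proxy really is running in ipvs mode, check the mode recorded in its ConfigMap and list the IPVS virtual servers (ipvsadm must be installed on the node):

[root@k8s-master01 ~]# kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
# expected: mode: ipvs
[root@k8s-master01 ~]# ipvsadm -Ln
# IPVS virtual servers for the kubernetes and kube-dns Services should be listed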

With that, the kubeadm-based cluster deployment is complete!

Reference: https://mp.weixin.qq.com/s?__biz=MzU3OTcyNzgzNA==&mid=2247483761&idx=1&sn=514470876f051b8fc0d37177179fed0e&chksm=fd60f9c4ca1770d2db65d54cd4505852f48da652ba97b0059bd3053433b5939a9bd0d2688ed5&mpshare=1&scene=1&srcid=1108PK1XLihIoaIMH5uBhJ1w&sharer_sharetime=1667865977326&sharer_shareid=7b9c9485b068a04a0d3d18a077f89c65#rd

From: https://blog.51cto.com/dayu/5847893
