
Kubernetes Quick Deployment (v1.31.4)


1. Initialization

Upgrade the system to the latest packages; the Aliyun mirror works well for this. This tutorial uses CentOS 7, since it is still what a large share of users run.

yum -y update

Disable SELinux and the firewall

systemctl disable --now firewalld
setenforce 0
sed -i '/SELINUX=/s@enforcing@disabled@g' /etc/selinux/config

After a reboot, verify their status

sestatus
sudo systemctl status firewalld

Temporarily disable swap (this does not survive a reboot)

sudo swapoff -a

To disable it permanently, comment out the following line in /etc/fstab

......
# /dev/mapper/centos-swap swap                    swap    defaults        0 0
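
After disabling swap, confirm that the Swap line reads all zeros:

free -h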

Next, give each host a fixed IP address (a minimal static-IP sketch follows the /etc/hosts snippet below), then set the hostnames and the address mappings. This walkthrough uses one master and one worker.

# master
hostnamectl set-hostname master
# worker
hostnamectl set-hostname worker

Add the following entries to /etc/hosts

192.168.150.113 master
192.168.150.123 worker
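
For reference, a static IP on CentOS 7 is usually set through a network-scripts ifcfg file. The following is a minimal sketch for the master; the interface name ens33, the gateway, and the DNS server are assumptions to adapt to your environment:

cat << EOF | tee /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.150.113
PREFIX=24
GATEWAY=192.168.150.2
DNS1=223.5.5.5
EOF
# Restart networking to apply (assumes the legacy network service is in use)
systemctl restart network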

If you are using virtual machines, it is best not to clone them, because no two nodes may share a hostname, MAC address, or product_uuid. You can verify these with the commands below. Also make sure the clocks of all nodes stay synchronized (see the chrony sketch after these commands).

ip link
sudo cat /sys/class/dmi/id/product_uuid
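
For time synchronization, chrony is a simple option on CentOS 7; a minimal sketch, assuming the default pool servers are reachable:

# Install and start chrony, then check the sync sources
yum -y install chrony
systemctl enable --now chronyd
chronyc sources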

Install a container runtime; Docker is used here

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker
yum -y install docker-ce

Configure Docker to use the systemd cgroup driver (the snippet also moves the data root to /data/docker)

cat << EOF | tee /etc/docker/daemon.json
{
    "data-root": "/data/docker",
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Reload the systemd configuration
systemctl daemon-reload
# Enable and start
systemctl enable --now docker containerd
systemctl restart docker
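
Confirm that the systemd cgroup driver took effect:

docker info --format '{{.CgroupDriver}}'
# Expected output: systemd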

Install cri-dockerd

# Download
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14-3.el7.x86_64.rpm
# Install
yum -y install ./cri-dockerd-0.3.14-3.el7.x86_64.rpm

Configure cri-dockerd (note: kubeadm 1.31 prefers the pause:3.10 sandbox image, so the init output later will warn about the pause:3.9 set here)

cat << EOF | tee /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

cat << EOF | tee /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify

ExecStart=/usr/bin/cri-dockerd \
          --container-runtime-endpoint=unix:///var/run/cri-docker.sock \
          --network-plugin=cni \
          --cni-bin-dir=/opt/cni/bin \
          --cni-conf-dir=/etc/cni/net.d \
          --image-pull-progress-deadline=30s \
          --pod-infra-container-image=registry.k8s.io/pause:3.9 \
          --docker-endpoint=unix:///run/docker.sock \
          --cri-dockerd-root-directory=/data/docker
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

EOF

# Reload the systemd configuration
systemctl daemon-reload
# Enable and start
systemctl enable --now cri-docker
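
A quick check that cri-dockerd is up and the socket kubeadm will talk to exists (the socket path matches the --container-runtime-endpoint flag above):

systemctl is-active cri-docker
ls -l /var/run/cri-docker.sock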

Enable IP forwarding, tune swap behavior, and set related kernel parameters

# IP forwarding, swap tuning, and related kernel parameters
cat << EOF | tee /etc/sysctl.d/k8s.conf
vm.swappiness = 0
vm.panic_on_oom = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
EOF
# Apply the settings
sysctl -p /etc/sysctl.d/k8s.conf

Load the br_netfilter module; the net.bridge.bridge-nf-call-* keys above only exist once it is loaded, so if sysctl -p reported errors for them, re-run it after this step

[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter
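
Note that modprobe does not persist across reboots; to have the module load automatically at boot, add a systemd modules-load entry:

cat << EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF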

Install ipset and ipvsadm

yum -y install ipset ipvsadm

Configure IPVS module loading by listing the modules that need to be loaded

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Make the script executable, run it, and check that the modules are loaded

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

2. Installing kubeadm

Set up the Kubernetes yum repository

# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Install kubelet, kubeadm, and kubectl, and enable kubelet so that it starts automatically on boot

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Configure the kubelet cgroup driver
cat << EOF | tee /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
sudo systemctl enable --now kubelet
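
Verify the installed versions (the kubelet will restart in a loop until kubeadm init runs, which is expected):

kubeadm version -o short
kubelet --version
kubectl version --client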

3. Single-Node Initialization

It is best to reboot the system before initializing. Make sure the Pod network CIDR does not overlap with the host network.

kubeadm init --cri-socket unix:///var/run/cri-docker.sock --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.150.113

Sample output

I1221 23:12:32.357095   10451 version.go:261] remote version is much newer: v1.32.0; falling back to: stable-1.31
[init] Using Kubernetes version: v1.31.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1221 23:12:33.302092   10451 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.150.113]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.150.113 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.150.113 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.857711ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 11.501700052s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: w4pf01.vl800yhthhjoa8cn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.150.113:6443 --token w4pf01.vl800yhthhjoa8cn \
        --discovery-token-ca-cert-hash sha256:1dfd090ad4d32548796e1b22db065e960943b22cf94be8e05a5126dfbee8a1f8 

Continue by following the hints in the output, mainly so that kubectl also works for regular users

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
# Remove the control-plane taint (optional; done here so workloads can also run on the master)
kubectl taint node --all node-role.kubernetes.io/control-plane-
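
To confirm the taint is gone (assuming the node name master from this tutorial):

kubectl describe node master | grep Taints
# Expected output: Taints: <none>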

If initialization fails, clean up with the following command before retrying

rm -rf /etc/kubernetes /var/lib/etcd /var/lib/kubelet /var/lib/cni /run/kubernetes
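
It is also worth running kubeadm reset first, which undoes most of what kubeadm init changed; a sketch using the same CRI socket as at init:

kubeadm reset -f --cri-socket unix:///var/run/cri-docker.sock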

4. Cluster Network Setup

Work in a fresh directory

mkdir calicodir
cd calicodir/

Download the Tigera operator manifest

wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml

Create the operator from the manifest. Note that kubectl apply fails here because the manifest's CRDs exceed the annotation size limit of client-side apply, so use kubectl create.

kubectl create -f tigera-operator.yaml

Install Calico through its custom resources

wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml

Edit line 13 of the file so that the cidr matches the range passed to kubeadm init --pod-network-cidr

vim custom-resources.yaml
......
10    ipPools:
11    - name: default-ipv4-ippool
12      blockSize: 26
13      cidr: 10.244.0.0/16
14      encapsulation: VXLANCrossSubnet
......

Apply the manifest

kubectl create -f custom-resources.yaml

Check that the new namespaces exist

kubectl get ns

Watch the pods in the calico-system namespace come up

watch kubectl get pods -n calico-system

Once they are all running

kubectl get pods -n calico-system

Sample successful output

NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-f9484d6cf-d6m9k   1/1     Running   0          5m53s
calico-node-hjt9d                         1/1     Running   0          5m53s
calico-typha-6ddb5b7f4c-qdsmj             1/1     Running   0          5m53s
csi-node-driver-z9cnh                     2/2     Running   0          5m53s

5. Installing and Configuring calicoctl

Download the binary

curl -L https://github.com/projectcalico/calico/releases/download/v3.29.1/calicoctl-linux-amd64 -o calicoctl

Make the file executable

chmod +x ./calicoctl

Install calicoctl onto the PATH

mv calicoctl /usr/bin/

Verify the installed file

ls /usr/bin/calicoctl

Check the calicoctl version

calicoctl version

Sample output

Client Version:    v3.29.1
Git commit:        ddfc3b1ea
Cluster Version:   v3.29.1
Cluster Type:      typha,kdd,k8s,operator,bgp,kubeadm

Connect to the Kubernetes cluster through ~/.kube/config and list the registered nodes; master appearing in the output means it works

DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get nodes --allow-version-mismatch
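
To avoid setting the environment variables every time, calicoctl can read its datastore settings from a configuration file; a minimal sketch at the default path, assuming the kubeconfig lives at /root/.kube/config:

mkdir -p /etc/calico
cat << EOF | tee /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
EOF
calicoctl get nodes --allow-version-mismatch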

6. Adding Worker Nodes

Run the join command printed during initialization, appending the --cri-socket flag as below; rebooting the worker once beforehand is also a good idea

kubeadm join 192.168.150.113:6443 --token w4pf01.vl800yhthhjoa8cn --discovery-token-ca-cert-hash sha256:1dfd090ad4d32548796e1b22db065e960943b22cf94be8e05a5126dfbee8a1f8 --cri-socket unix:///var/run/cri-docker.sock

Output like the following means the node joined the cluster successfully

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 507.931252ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the node status (on the master)

[root@master calicodir]# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master    Ready    control-plane   72m   v1.31.4
worker    Ready    <none>          37m   v1.31.4

If joining fails, clean up with the following command and retry

rm -rf /etc/kubernetes /var/lib/etcd /var/lib/kubelet /var/lib/cni /run/kubernetes

If you did not keep the join command from initialization, regenerate it on the master

kubeadm token create --print-join-command
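
Once the nodes show Ready, a quick smoke test confirms that scheduling works (a sketch; assumes the nginx image can be pulled from your network):

kubectl create deployment nginx --image=nginx --replicas=2
kubectl get pods -o wide
# Clean up afterwards
kubectl delete deployment nginx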

7. Notes

To make clear which sections run on the master and which on the workers:

Node      Sections to perform
master    1, 2, 3, 4, 5
worker    1, 2, 6

Also, if your connection is slow, you can use the Aliyun mirror (see the Aliyun Kubernetes mirror page for instructions); note that it currently only covers v1.24 - v1.29, though newer versions may be added later.


From: https://blog.csdn.net/m0_62943934/article/details/144638539
