
Installing and Deploying Kubernetes 1.30.2 with kubeadm


1. Environment preparation

1.1 Node planning

#CentOS Linux release 7.9.2009 (Core) 
master01        10.202.30.22	# 4C8G
node01          10.202.30.30	# 4C8G
node02          10.202.30.31	# 4C8G

1.2 Configure hosts name resolution

#vim /etc/hosts
10.202.30.22    master01
10.202.30.30    node01
10.202.30.31    node02

1.3 Configure passwordless SSH between nodes

# Generate a key pair (press Enter through the prompts) (all nodes)
ssh-keygen -t rsa -q -N ''

# Distribute the public key to every node (all nodes)
for i in master01 node01 node02;do ssh-copy-id -i ~/.ssh/id_rsa.pub $i;done

# Distribute the hosts file to the other nodes
for i in node01 node02;do scp /etc/hosts root@$i:/etc/;done
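Before moving on, it is worth confirming that passwordless SSH and name resolution actually work; a quick check from master01, using the hostnames planned above:

# Each command should print the remote hostname without a password prompt
for i in master01 node01 node02; do ssh -o BatchMode=yes root@$i hostname; done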

1.4 Configure time synchronization

# Install chrony
yum install chrony -y

# Start the service and enable it at boot
systemctl start chronyd.service
systemctl enable chronyd.service

# Check chrony sources
chronyc sources

# Verify the time
date

1.5 Disable the firewall and SELinux

# Disable SELinux
## Temporarily
setenforce 0

## Permanently (takes effect after reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Stop and disable the firewall
systemctl disable --now firewalld

1.6 Disable swap and NetworkManager

# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a

# Disable NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager
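A quick sanity check that swap is really off (swapon --show should print nothing and free should report 0 swap):

# Verify swap is disabled
swapon --show
free -h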

1.7 Configure the CentOS base and EPEL repos

# Configure the Aliyun CentOS 7 base repo
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

# Configure the Aliyun EPEL repo
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo

1.8 Configure the Kubernetes Aliyun repo

# Kubernetes Aliyun repo, legacy layout (all nodes) (the legacy repo only goes up to 1.28)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

## For 1.30.2 you need the new-layout repo below; the legacy repo above is kept for reference only

# Kubernetes Aliyun repo, new layout (all nodes)
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key
EOF

## Note: since upstream does not expose a sync mechanism, the repo's GPG index check may fail; in that case install with
## yum install -y --nogpgcheck kubelet kubeadm kubectl


## Distribute the Kubernetes repo to the other nodes
for i in node01 node02;do scp /etc/yum.repos.d/kubernetes.repo root@$i:/etc/yum.repos.d/;done
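To confirm the repo is usable and that 1.30.2 is actually published before installing anything, a quick check on any node:

# Rebuild the cache and list the available kubeadm versions
yum makecache
yum list kubeadm --showduplicates | tail -n 5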

1.9 Kernel upgrade

1.9.1 Download the RPM packages

# https://www.elrepo.org no longer hosts RHEL 7 packages (only RHEL 8 and RHEL 9); use the archive mirror below
# Upgrade the kernel from RPMs; download them from
http://mirrors.coreix.net/elrepo-archive-archive/kernel/el7/x86_64/RPMS/
# A CentOS kernel upgrade needs three RPMs (pick matching versions), fetched in the sketch below:
kernel
kernel-devel
kernel-headers
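A download sketch, assuming the filenames match the 5.4.278 packages installed in 1.9.2 (check the mirror listing for the exact names):

# Download the three kernel RPMs (all nodes)
BASE=http://mirrors.coreix.net/elrepo-archive-archive/kernel/el7/x86_64/RPMS
wget $BASE/kernel-lt-5.4.278-1.el7.elrepo.x86_64.rpm
wget $BASE/kernel-lt-devel-5.4.278-1.el7.elrepo.x86_64.rpm
wget $BASE/kernel-lt-headers-5.4.278-1.el7.elrepo.x86_64.rpm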
1.9.2 Install the kernel packages

# Install the kernel package (kernel)
rpm -ivh kernel-lt-5.4.278-1.el7.elrepo.x86_64.rpm

# Install the kernel development package (kernel-devel)
rpm -ivh kernel-lt-devel-5.4.278-1.el7.elrepo.x86_64.rpm

# Install the kernel headers package (kernel-headers)
rpm -ivh kernel-lt-headers-5.4.278-1.el7.elrepo.x86_64.rpm
1.9.3 Verify the kernel

# List the boot menu entries
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg

CentOS Linux (5.4.278-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-a66a00f1a66a00f1a66a00f1a66a00f1) 7 (Core)

# Entries are indexed from 0 and the new kernel is inserted at the top, so 5.4.278 is entry 0; select it
grub2-set-default 0

# Reboot
reboot

# Verify the running kernel
uname -r
# 5.4.278-1.el7.elrepo.x86_64

2. Install and configure IPVS and Docker

2.1 Install IPVS

# Install IPVS tools and dependencies
yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp

# Load the IPVS modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr
ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
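One gap worth closing here: the net.bridge.bridge-nf-call-* settings in the next step only exist once the br_netfilter module is loaded, and nothing above loads it. A minimal sketch to load it (plus overlay, which containerd uses) now and on every boot (all nodes):

# Load the bridge netfilter and overlay modules and persist them across reboots
modprobe overlay
modprobe br_netfilter
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF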

# Add the Kubernetes networking sysctls and apply them (all nodes)
## /etc/sysctl.d/k8s.conf
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# Apply immediately
sysctl --system
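A spot check that the key values took effect (both should print 1):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables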

2.2 Install Docker

# Install docker-ce from the Aliyun mirror
# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Step 2: add the repo definition
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Step 3: point the repo at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

# Step 4: refresh the cache and install docker-ce
sudo yum makecache fast
sudo yum -y install docker-ce

# Step 5: start and enable the Docker service
systemctl start docker && systemctl enable docker && sudo systemctl status docker

# Step 6: configure the Aliyun registry mirror (Aliyun console -> Container Registry -> mirror tools)
# and add "exec-opts": ["native.cgroupdriver=systemd"] so Docker and kubelet use the same cgroup driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://d6mtathr.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Restart Docker
sudo systemctl daemon-reload && sudo systemctl restart docker && sudo systemctl status docker

# Adjust the containerd config (all nodes)
## Back up the original file, then regenerate a full default config
cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
containerd config default > /etc/containerd/config.toml

# vim /etc/containerd/config.toml
# 1. Find the line SystemdCgroup = false and change false to true.
# 2. Find the sandbox_image line and change it to registry.cn-guangzhou.aliyuncs.com/my_aliyund/pause:3.9
#    (matching the pause image pulled in section 3.1); the sed sketch below makes both edits non-interactively.
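A non-interactive sketch of the same two edits (the sandbox_image pattern assumes the freshly regenerated default config, where only one sandbox_image line exists):

# Switch containerd to the systemd cgroup driver and swap the pause image
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-guangzhou.aliyuncs.com/my_aliyund/pause:3.9"#' /etc/containerd/config.toml

# Confirm both edits landed
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml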

# Restart containerd after the changes
sudo systemctl restart containerd && sudo systemctl status containerd && sudo systemctl enable containerd

3. Install Kubernetes

3.1 kubeadm initialization

# Install on all nodes:
yum install -y kubelet-1.30.2 kubeadm-1.30.2 kubectl-1.30.2

# Enable kubelet on all nodes:
systemctl enable kubelet.service

# Print the default init parameters:
kubeadm config print init-defaults

# List the images and versions the cluster needs (the official registry is hard to reach, so pull mirrored copies in advance):
kubeadm config images list

# Run on master01
# Pull the required images locally
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/kube-apiserver:v1.30.2
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/kube-controller-manager:v1.30.2
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/kube-scheduler:v1.30.2
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/kube-proxy:v1.30.2
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/pause:3.9
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/etcd:3.5.12-0
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/coredns:v1.11.1

# You can either re-tag the images or point kubeadm at the mirror repository; the init below uses the mirror (a re-tag sketch follows it)
# kubeadm init
kubeadm init \
--apiserver-advertise-address=10.202.30.22 \
--control-plane-endpoint=master01 \
--image-repository registry.cn-guangzhou.aliyuncs.com/my_aliyund \
--kubernetes-version v1.30.2  \
--service-cidr=10.10.0.0/12  \
--pod-network-cidr=10.254.0.0/16
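If you prefer the re-tag route instead of --image-repository, a sketch is below. Two caveats: kubeadm expects coredns under the nested registry.k8s.io/coredns/coredns name, and since containerd (not Docker) is the CRI runtime here, images tagged through the docker CLI are not visible to containerd anyway, which is why the --image-repository flag above is the simpler path.

# Re-tag the mirrored images to their upstream names (sketch)
SRC=registry.cn-guangzhou.aliyuncs.com/my_aliyund
for img in kube-apiserver:v1.30.2 kube-controller-manager:v1.30.2 \
           kube-scheduler:v1.30.2 kube-proxy:v1.30.2 pause:3.9 etcd:3.5.12-0; do
  docker tag $SRC/$img registry.k8s.io/$img
done
docker tag $SRC/coredns:v1.11.1 registry.k8s.io/coredns/coredns:v1.11.1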

# The join command printed by init (used to add nodes)
kubeadm join master01:6443 --token m9dhz2.u0annuoi45g4azer \
    --discovery-token-ca-cert-hash sha256:7f98d15fa8a053931dec6e062e97e00f3cdb66c77bd041ca429ce089f0fc8cac

# If you forgot to save it, regenerate it with
kubeadm token create --print-join-command

3.2 Errors encountered during kubeadm init

[root@master01 ~]# kubeadm init \
> --apiserver-advertise-address=10.202.99.128 \
> --control-plane-endpoint=master01 \
> --image-repository registry.cn-guangzhou.aliyuncs.com/my_aliyund \
> --kubernetes-version v1.30.2  \
> --service-cidr=10.10.0.0/12  \
> --pod-network-cidr=10.254.0.0/16
[init] Using Kubernetes version: v1.30.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2024-07-18T03:14:29-04:00" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
#####################

# Fix
# vim /etc/containerd/config.toml
# Set SystemdCgroup to true
SystemdCgroup = true
# Set every runtime_type to io.containerd.runtime.v1.linux
runtime_type = "io.containerd.runtime.v1.linux"
# Restart containerd
systemctl restart containerd
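To confirm the CRI endpoint is healthy before rerunning kubeadm init, crictl (installed as a cri-tools dependency of the kubeadm packages) can query it directly:

# Should dump runtime status instead of the 'unknown service' error
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info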

## Article on resolving this containerd initialization error
https://zhuanlan.zhihu.com/p/618551600

3.3 Successful initialization output

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join master01:6443 --token rdn0pu.r6kxla7vzf4bcftt \
        --discovery-token-ca-cert-hash sha256:611744ae7304f5c18cf46ca3bba42d5b6f5aa671173249fa1a17088ab37308ee \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master01:6443 --token rdn0pu.r6kxla7vzf4bcftt \
        --discovery-token-ca-cert-hash sha256:611744ae7304f5c18cf46ca3bba42d5b6f5aa671173249fa1a17088ab37308ee

3.4 Join nodes

# If you hit this error
# E0401 02:25:24.207258    6245 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused

# Temporary fix
export KUBECONFIG=/etc/kubernetes/admin.conf

# Permanent fix
mkdir -p ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config

# On each node to be joined, run the command from the init output
# Join as an additional control-plane node
kubeadm join master01:6443 --token rdn0pu.r6kxla7vzf4bcftt \
        --discovery-token-ca-cert-hash sha256:611744ae7304f5c18cf46ca3bba42d5b6f5aa671173249fa1a17088ab37308ee \
        --control-plane
# Join as a worker node
kubeadm join master01:6443 --token rdn0pu.r6kxla7vzf4bcftt \
        --discovery-token-ca-cert-hash sha256:611744ae7304f5c18cf46ca3bba42d5b6f5aa671173249fa1a17088ab37308ee

3.5 Install the network plugin (flannel)

# Check the cluster status
kubectl get nodes

# The nodes show NotReady
# Cause: coredns cannot start until a network plugin is installed, which keeps the nodes unavailable
# Pull the flannel images, save the manifest from 3.5.1 below as /data/kube-flannel.yaml, then apply it
# (the Network in its net-conf.json must match the --pod-network-cidr passed to kubeadm init)
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/flannel-cni-plugin:v1.1.2
docker pull registry.cn-guangzhou.aliyuncs.com/my_aliyund/flannel:v0.21.5
kubectl apply -f /data/kube-flannel.yaml
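Once the manifest is applied, watch the flannel DaemonSet roll out, coredns start, and the nodes flip to Ready:

# Verify the network plugin came up
kubectl -n kube-flannel get pods -o wide
kubectl -n kube-system get pods
kubectl get nodes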

3.5.1 flannel.yaml

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.254.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: registry.cn-guangzhou.aliyuncs.com/my_aliyund/flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: registry.cn-guangzhou.aliyuncs.com/my_aliyund/flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: registry.cn-guangzhou.aliyuncs.com/my_aliyund/flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

3.6 Configure bash-completion for kubectl

yum install -y bash-completion

# Load completion in the current shell
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)

# Persist kubectl completion for future shells
kubectl completion bash > ~/.kube/completion.bash.inc
echo 'source ~/.kube/completion.bash.inc' >> ~/.bashrc

3.7 Check cluster status

# Cluster info
kubectl cluster-info

# Pod status across all namespaces
kubectl get pod -A
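As a final smoke test, a throwaway deployment exercises scheduling, the pod network, and kube-proxy end to end (the nginx image is pulled from Docker Hub, so substitute a mirror if direct pulls are blocked):

# Create, inspect, and clean up a test deployment
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80
kubectl get pods -o wide
kubectl get svc nginx
kubectl delete deployment nginx && kubectl delete svc nginx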

