
Installing a Kubernetes Cluster with kubeadm



1. Machine preparation (run on all master and node nodes)

The nodes used to deploy a k8s cluster fall into two roles:

  • master: the cluster's master/initialization node; at least 2 CPU cores and 4 GB of RAM
  • slave: the cluster's worker (slave) nodes, any number of them; at least 1 CPU core and 2 GB of RAM
Hostname, node IP, and components to deploy:

k8s-master-10  10.0.0.10  etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel, docker

k8s-node-11    10.0.0.11  kubeadm, kubectl, kubelet, kube-proxy, flannel, docker
k8s-node-12    10.0.0.12  kubeadm, kubectl, kubelet, kube-proxy, flannel, docker

Configure hosts resolution

cat  >>/etc/hosts <<'EOF'
10.0.0.10 k8s-master-10 
10.0.0.11 k8s-node-11
10.0.0.12 k8s-node-12
EOF

ping -c 2 k8s-master-10
ping -c 2 k8s-node-11
ping -c 2 k8s-node-12
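
The /etc/hosts entries above assume each machine's hostname has already been set to match. If that is not the case, a quick way to set them (run the matching command on each machine; the names simply mirror the hosts file above):

# On 10.0.0.10
hostnamectl set-hostname k8s-master-10
# On 10.0.0.11
hostnamectl set-hostname k8s-node-11
# On 10.0.0.12
hostnamectl set-hostname k8s-node-12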

Adjust system settings

If there are no security-group restrictions between nodes (machines on the internal network can reach each other freely), this can be skipped; otherwise, at least the following ports must be reachable (a quick reachability check is sketched below):
k8s-master node: TCP 6443, 2379, 2380, 60080, 60081; all UDP ports open
k8s-slave nodes: all UDP ports open
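
A simple spot check, once the master components are up and listening, is a TCP probe from one of the nodes. This sketch assumes nc (nmap-ncat) is installed and is only a verification aid, not part of the original steps:

# Run from a node after the master has been initialized
for port in 6443 2379 2380; do
  nc -vz -w 2 10.0.0.10 $port
done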

Configure iptables (disable firewalld and SELinux)

systemctl stop firewalld NetworkManager
systemctl disable firewalld NetworkManager

sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
getenforce

iptables -F
iptables -X
iptables -Z

iptables -P FORWARD ACCEPT

Disable swap

swapoff -a
# Prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
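
To confirm swap is really off (kubelet refuses to start with swap enabled unless told to ignore it), a quick verification:

swapon -s               # should print nothing
free -h                 # the Swap line should show 0
grep swap /etc/fstab    # the swap entry should now be commented out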

Configure Aliyun yum repositories

curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo

curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum clean all && yum makecache fast

Make sure NTP and network connectivity work

yum install chrony -y

systemctl start chronyd
systemctl enable chronyd

date
hwclock -w

ping -c 2 baidu.com

Adjust kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
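
Note that modprobe br_netfilter only loads the module for the current boot. A small addition (not in the original steps) makes the module load automatically after a reboot, and the sysctl values can be verified right away:

# Load br_netfilter automatically on boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Verify the bridge sysctls took effect (both should print "= 1")
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables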

2. Install k8s with kubeadm

kubeadm is one of the deployment tools recommended by Kubernetes. The k8s components are packaged as container images, and kubeadm initializes and creates the cluster from them.

Install Docker (run on all master and node nodes)

# Can be put into a script file and run as a whole
# Configure the Aliyun Docker repository
yum remove docker docker-common docker-selinux docker-engine -y 

curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum makecache fast

yum list docker-ce --showduplicates

yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 -y

# Configure a Docker registry mirror and set the cgroup driver to systemd, as recommended by k8s; otherwise kubeadm init reports an error.

mkdir -p /etc/docker

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://iq1ib072.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Start Docker
systemctl daemon-reload && systemctl start docker && systemctl enable docker
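
Before moving on it is worth confirming that Docker actually picked up the systemd cgroup driver and the registry mirror; otherwise kubeadm init will print the IsDockerSystemdCheck warning seen later. A quick check:

docker info | grep -i 'cgroup driver'      # expect: Cgroup Driver: systemd
docker info | grep -A1 'Registry Mirrors'
docker version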

Install the kubeadm tools (run on all master and node nodes)

# Can be put into a script file and run as a whole
# Configure the Aliyun repositories

curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all && yum makecache

#yum list kubeadm --showduplicates   # list the k8s versions available from the Aliyun repo
# Install the pinned version 1.19.3; the kubeadm version determines which k8s cluster images will be pulled
yum install kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 ipvsadm -y

# Check the kubeadm version
kubeadm version
# kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", Go

# Enable kubelet at boot
# kubelet is the agent that ties the master and the nodes together into a cluster.
systemctl enable kubelet
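
Optionally, the control-plane images can be pulled ahead of time so that kubeadm init does not stall on downloads (the init log below also mentions 'kubeadm config images pull'). Run this on the master; the repository and version match the init command used later:

kubeadm config images list \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.3

kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.3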

Check which ports are currently in use

netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1019/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1247/master         
tcp6       0      0 :::22                   :::*                    LISTEN      1019/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1247/master         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           1650/chronyd        
udp6       0      0 ::1:323                 :::*                                1650/chronyd    

Initialize the master node (master)

kubeadm init \
--apiserver-advertise-address=10.0.0.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.19.3 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.2.0.0/16 \
--service-dns-domain=cluster.local \
--ignore-preflight-errors=Swap \
--ignore-preflight-errors=NumCPU

# Parameter explanation
kubeadm init \
--apiserver-advertise-address=10.0.0.10 \                        API server advertise address
--image-repository registry.aliyuncs.com/google_containers \     image repository
--kubernetes-version v1.19.3 \                                   k8s version, must match the kubeadm version
--service-cidr=10.1.0.0/16 \                                     Service (ClusterIP) network
--pod-network-cidr=10.2.0.0/16 \                                 Pod network CIDR, pick the range you want
--service-dns-domain=cluster.local \                             DNS suffix inside the cluster
--ignore-preflight-errors=Swap \                                 ignore the swap preflight error
--ignore-preflight-errors=NumCPU                                 ignore the CPU-count preflight error
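
The same initialization can also be expressed as a kubeadm configuration file, which is easier to keep under version control. This is an optional equivalent sketch (the filename kubeadm-config.yaml is arbitrary), not part of the original steps:

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.10
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.2.0.0/16
  dnsDomain: cluster.local
EOF

kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap,NumCPU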
# The initialization pulls images from Aliyun; be sure to keep the final output
W1018 17:52:21.939904    5332 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-10 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 10.0.0.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-10 localhost] and IPs [10.0.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-10 localhost] and IPs [10.0.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.507001 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master-10 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-10 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: yew79j.ob41k8e2rp340pvu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.10:6443 --token yew79j.ob41k8e2rp340pvu \
    --discovery-token-ca-cert-hash sha256:fcaef0c918aa146ce08d4816dcc4f194bb6933edb7ee055a63bfd3b506d6634f

The initialization roughly does the following:
- generates certificates; communication between cluster components is encrypted with them
- creates the configuration files for the k8s components
- creates the control-plane static Pods (the core components)
- generates the authentication and authorization (RBAC) rules
- deploys the CoreDNS and kube-proxy add-ons
- finally, per the printed instructions: create the kubeconfig, install a network plugin, and join the nodes to the cluster

Check the listening ports and component processes

netstat -tunlp

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      16367/kubelet       
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      16681/kube-proxy    
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      16095/etcd          
tcp        0      0 10.0.0.10:2379          0.0.0.0:*               LISTEN      16095/etcd          
tcp        0      0 10.0.0.10:2380          0.0.0.0:*               LISTEN      16095/etcd          
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      16095/etcd          
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      16035/kube-controll 
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      16048/kube-schedule 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1019/sshd           
tcp        0      0 127.0.0.1:40311         0.0.0.0:*               LISTEN      16367/kubelet       
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1247/master         
tcp6       0      0 :::10250                :::*                    LISTEN      16367/kubelet       
tcp6       0      0 :::6443                 :::*                    LISTEN      16043/kube-apiserve 
tcp6       0      0 :::10256                :::*                    LISTEN      16681/kube-proxy    
tcp6       0      0 :::22                   :::*                    LISTEN      1019/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1247/master         
udp        0      0 127.0.0.1:323           0.0.0.0:*                           1650/chronyd        
udp6       0      0 ::1:323                 :::*                                1650/chronyd        
[root@k8s-master-10 ~]#

Configure kubectl on the master as instructed

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
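
If you are working as root, an alternative to copying the file is simply pointing kubectl at the admin kubeconfig (this is also what newer kubeadm versions print for the root user):

export KUBECONFIG=/etc/kubernetes/admin.conf
# To make it permanent for root:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bashrc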

Check the k8s node status

[root@k8s-master-10 ~]#kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
k8s-master-10   NotReady   master   3h14m   v1.19.3

⚠️ Note: at this point kubectl get nodes should show the node as NotReady, because the network plugin has not been installed yet.

If an error occurs during initialization, fix the issue according to the message, run kubeadm reset, and then run the init again.
Possible warnings include:
1. NumCPU
2. IsDockerSystemdCheck
3. SystemVerification

Deploy the Flannel network plugin (master)

# Download flannel from:
https://github.com/coreos/flannel.git
Upload it to the server and unpack it in the desired location.

Modify the network plugin configuration (master)

vim /root/flannel-master/Documentation/kube-flannel.yml

Because we customized the Pod network CIDR, this file must be changed to match; alternatively, you could keep the default Pod CIDR during initialization.
First change: the subnet flannel hands out
  net-conf.json: |
    {
      "Network": "10.2.0.0/16", #这个ip和初始化master节点的pod-network-cidr参数设置一致
      "Backend": {
        "Type": "vxlan"
      }
    }

Second change: if the machine has multiple NICs, specify the name of the NIC that can reach the external network; if unspecified, flannel picks the first NIC by default (a way to look up the name is sketched after this snippet).

      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33  # the NIC that can reach the external network; check with `ip a`
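
One quick way to find the NIC name for --iface (assuming, as in this setup, the node IPs live on 10.0.0.x and that NIC also carries the default route):

# NIC that holds the node IP
ip -o -4 addr show | grep '10\.0\.0\.'

# NIC used by the default route
ip route show default | awk '{print $5}'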

Apply the YAML manifest to create flannel (master)

# This creates the Pods and their containers
[root@k8s-master-10 ~/flannel-master/Documentation]#kubectl create -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

# All of the created resources can be inspected afterwards
# flannel runs as a DaemonSet, which guarantees exactly one Pod per node.


# Check the flannel Pods
kubectl get pods -n kube-flannel

# Check the Docker containers
docker ps|grep flannel

Make sure flannel was created correctly (master)

kubectl get pods -n kube-system


Join the worker nodes to the cluster (node)

kubeadm join 10.0.0.10:6443 --token yew79j.ob41k8e2rp340pvu \
    --discovery-token-ca-cert-hash sha256:fcaef0c918aa146ce08d4816dcc4f194bb6933edb7ee055a63bfd3b506d6634f
    

If you forgot to save the join command, it can be regenerated with:
kubeadm token create --print-join-command
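
The token and the CA-cert hash can also be produced separately: the hash in the join command is just the SHA-256 of the cluster CA public key, so it can be recomputed on the master at any time (this is the recipe from the upstream kubeadm docs, added here for reference):

# New token (valid for 24h by default)
kubeadm token create

# Recompute the --discovery-token-ca-cert-hash value
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'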

Check the status of all cluster nodes (master)

[root@k8s-master-10 ~/flannel-master/Documentation]#kubectl get node --show-labels=true -o wide
NAME            STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME   LABELS
k8s-master-10   Ready    master   4h9m    v1.19.3   10.0.0.10     <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://19.3.15    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-10,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node-11     Ready    <none>   5m10s   v1.19.3   10.0.0.11     <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://19.3.15    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-11,kubernetes.io/os=linux
k8s-node-12     Ready    <none>   5m6s    v1.19.3   10.0.0.12     <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://19.3.15    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-12,kubernetes.io/os=linux
[root@k8s-master-10 ~/flannel-master/Documentation]#

Configure kubectl command completion (master)

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

3. Verify the cluster works

$ kubectl get nodes  # check that all nodes are Ready

Create a test Pod using nginx:1.17.9

kubectl run  yuchao-nginx --image=nginx:1.17.9

kubectl get pod -o wide
NAME           READY   STATUS              RESTARTS   AGE   IP       NODE          NOMINATED NODE   READINESS GATES
yuchao-nginx   0/1     ContainerCreating   0          17s   <none>   k8s-node-11   <none>           <none>

[root@k8s-master-10 ~/flannel-master/Documentation]#kubectl get po -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP         NODE          NOMINATED NODE   READINESS GATES
yuchao-nginx   1/1     Running   0          2m47s   10.2.1.2   k8s-node-11   <none>           <none>

# View the Pod's details
kubectl describe pod yuchao-nginx 

Access nginx inside the cluster
The Docker container created on the node follows the default naming pattern k8s_<container name>_<pod name>_<namespace>_<random id>.

You can modify the content inside the container and then access it again:
[root@k8s-node-11 ~]#docker exec 772 bash -c 'echo "k8s deployment is complete" > /usr/share/nginx/html/index.html'
[root@k8s-master-10 ~]#curl 10.2.1.2
k8s deployment is complete
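
To also verify that the Service network (the 10.1.0.0/16 service-cidr) and kube-proxy work, the test Pod can be exposed as a ClusterIP Service and accessed through it. The Service name nginx-test below is arbitrary:

kubectl expose pod yuchao-nginx --name=nginx-test --port=80 --target-port=80

kubectl get svc nginx-test
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# nginx-test   ClusterIP   10.1.x.x     <none>        80/TCP    5s

# curl the ClusterIP shown above from any cluster node
curl <CLUSTER-IP-from-above>

# Clean up the test resources afterwards
kubectl delete svc nginx-test
kubectl delete pod yuchao-nginx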

4. Remove k8s

If the deployment goes wrong, you can either restore from a snapshot or clean up the environment:
# Run on every cluster node
kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /run/flannel/subnet.env
rm -rf /var/lib/cni/
mv /etc/kubernetes/ /tmp
mv /var/lib/etcd /tmp
mv ~/.kube /tmp
iptables -F
iptables -t nat -F
ipvsadm -C
ip link del kube-ipvs0
ip link del dummy0

From: https://www.cnblogs.com/chunjeh/p/17774251.html
