Installing a k8s Cluster with kubeadm (v1.20.9)


I. Installation environment

  1. Operating system:

[root@master1 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@master1 ~]# uname -a
Linux master1 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

  2. Nodes and specs

Role            IP             Spec
k8s-master01    172.16.0.11    4 CPU cores, 4 GB RAM, virtual machine
k8s-master02    172.16.0.12    4 CPU cores, 4 GB RAM, virtual machine
k8s-master03    172.16.0.13    4 CPU cores, 4 GB RAM, virtual machine
k8s-node01      172.16.0.14    4 CPU cores, 4 GB RAM, virtual machine
k8s-node02      172.16.0.15    4 CPU cores, 4 GB RAM, virtual machine

II. Basic configuration (run on all servers)

  1. Set the hostnames

hostnamectl set-hostname k8s-master01    # run on master01
hostnamectl set-hostname k8s-master02    # run on master02
hostnamectl set-hostname k8s-master03    # run on master03
hostnamectl set-hostname k8s-node01      # run on node01
hostnamectl set-hostname k8s-node02      # run on node02

  2. Add hosts entries

cat >> /etc/hosts << EOF
172.16.0.11 cluster-endpoint
172.16.0.11 k8s-master01
172.16.0.12 k8s-master02
172.16.0.13 k8s-master03
172.16.0.14 k8s-node01
172.16.0.15 k8s-node02
EOF
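
An optional sanity check that the names above resolve on each host (getent reads /etc/hosts directly):

getent hosts cluster-endpoint k8s-master01 k8s-node01    # should print the IPs configured above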

  3. Basic settings

#1. Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

#2. Disable the swap partition
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab

#3. Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

modprobe br_netfilter    # load the module now; modules-load.d only takes effect at boot

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system    # apply the settings

#4. Set the timezone to Shanghai
timedatectl set-timezone Asia/Shanghai

#5. Install and enable time synchronization
yum install chrony-3.4-1.el7.x86_64 -y && systemctl enable chronyd && systemctl start chronyd

#6. Disable the firewall
systemctl stop firewalld.service && systemctl disable firewalld.service
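
An optional spot check that the settings above took effect:

getenforce                                    # expect: Permissive
free -m | grep -i swap                        # expect the Swap line to show 0 total
sysctl net.bridge.bridge-nf-call-iptables     # expect: net.bridge.bridge-nf-call-iptables = 1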

  4. Install Docker

#1. Remove any old Docker packages
yum remove docker \
            docker-client \
            docker-client-latest \
            docker-common \
            docker-latest \
            docker-latest-logrotate \
            docker-logrotate \
            docker-engine

#2. Install dependencies and add the Docker CE repo
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#3. Install Docker
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

#4. Enable Docker to start at boot
systemctl enable docker --now

#5. Configure a registry mirror (this is my personal Aliyun accelerator; feel free to use it)
mkdir -p /etc/docker

tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://sx15mtuf.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload && systemctl restart docker
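
One optional tweak before moving on: the kubeadm preflight checks later warn that Docker is using the cgroupfs cgroup driver while kubeadm recommends systemd (see the join output in section VII). If you want to address that, a sketch of daemon.json that keeps the mirror above and adds the systemd cgroup driver looks like this; apply it before running kubeadm init:

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://sx15mtuf.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload && systemctl restart docker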

III. Install kubelet, kubeadm, and kubectl (run on all servers)

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF


yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

systemctl enable --now kubelet
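
An optional check that the pinned versions were installed. Note that kubelet will keep restarting until kubeadm init or kubeadm join is run on the machine; that is expected at this point:

kubeadm version -o short             # expect v1.20.9
kubelet --version                    # expect Kubernetes v1.20.9
kubectl version --client --short     # expect v1.20.9
systemctl is-enabled kubelet         # expect enabled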

IV. Pull the images each machine needs (run on all servers)

tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh
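
A quick way to confirm that all seven images listed in the script were pulled (the grep pattern simply matches the Aliyun mirror repository used above):

docker images | grep lfy_k8s_images | wc -l    # expect 7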

V. Initialize the control plane

  1. Run kubeadm init on k8s-master01

kubeadm init \
    --apiserver-advertise-address=172.16.0.11 \
    --control-plane-endpoint=cluster-endpoint \
    --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    --kubernetes-version v1.20.9 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16

Note: this is run on k8s-master01, and the IP address on the second line (--apiserver-advertise-address) must be changed to your own master01's IP. On success the output looks like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf \
--discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf \
--discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce
[root@k8s-master01 ~]#
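
Keep in mind that the bootstrap token in the join commands above is only valid for 24 hours by default. If it expires before you add a node, you can print a fresh worker join command on k8s-master01 (for another control-plane node, append --control-plane and share the certificates as described in section VIII):

kubeadm token create --print-join-command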

  2. Run the following commands on k8s-master01

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

  3. Check the cluster status

[root@k8s-master01 ~]# kubectl get nodes      # list all cluster nodes
[root@k8s-master01 ~]# kubectl get pods -A    # list all pods running in the cluster

VI. Install the network add-on (installing on k8s-master01 is enough)

[root@k8s-master01 ~]# curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
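
The calico pods take a little while to pull and start, and nodes stay NotReady until the network add-on is up. One way to watch progress (press Ctrl-C to stop watching):

[root@k8s-master01 ~]# kubectl get pods -n kube-system -w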

VII. Join worker nodes (using k8s-node01 as the example)

  1. Join the node
[root@k8s-node01 ~]# kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf \
> --discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Hostname]: hostname "k8s-node01" could not be reached
[WARNING Hostname]: hostname "k8s-node01": lookup k8s-node01 on 172.16.0.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node01 ~]#
  2. On a master node, view all node information
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   39m     v1.20.9
k8s-node01     Ready    <none>                 3m12s   v1.20.9

VIII. Join additional master nodes (using k8s-master02 as the example)

  1. Joining directly fails with a certificate error
[root@k8s-master02 ~]# kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf     --discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce     --control-plane
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.

failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher
  2. First copy the Kubernetes certificates over from k8s-master01
mkdir -p /etc/kubernetes/pki/etcd
scp root@k8s-master01:/etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/ca.key /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/sa.key /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/sa.pub /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/front-proxy-ca.key /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/
scp root@k8s-master01:/etc/kubernetes/pki/etcd/ca.key /etc/kubernetes/pki/etcd/
scp root@k8s-master01:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
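
An alternative to copying the certificates by hand, if you prefer it: kubeadm can upload the control-plane certificates into the cluster and hand the new master a decryption key. A sketch using the same token and hash as above; <certificate-key> is a placeholder for the key printed by the upload command:

# On k8s-master01: upload the shared certificates and print a certificate key
kubeadm init phase upload-certs --upload-certs

# On k8s-master02: join as a control-plane node using that key
kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf \
    --discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce \
    --control-plane --certificate-key <certificate-key>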
  3. Join again, and this time it succeeds
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
  4. Set up kubectl access on the new master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  5. View the whole cluster
[root@k8s-master02 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   19h   v1.20.9
k8s-master02   Ready    control-plane,master   19m   v1.20.9
k8s-master03   Ready    control-plane,master   12m   v1.20.9
k8s-node01     Ready    <none>                 18h   v1.20.9
k8s-node02     Ready    <none>                 18h   v1.20.9
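
Optionally, the <none> under ROLES for the workers can be filled in with a label; the role name worker below is an arbitrary choice, not something kubeadm sets for you:

[root@k8s-master01 ~]# kubectl label node k8s-node01 node-role.kubernetes.io/worker=
[root@k8s-master01 ~]# kubectl label node k8s-node02 node-role.kubernetes.io/worker=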

From: https://blog.51cto.com/u_5147178/5973806
