Note: the steps below were not written up from a single successful run, so a few IP addresses may differ from the run that actually succeeded.
First, prepare three virtual machines, each with 2 CPU cores and 2.2 GB of RAM:
192.168.3.121 k8s-master
192.168.3.133 k8s-node1
192.168.3.119 k8s-node2
1 Change the hostnames
In practice, the installation may fail if the hostnames are not changed (although the very first install happened to work without changing them; the root cause is not investigated here).
# On the master node
hostnamectl set-hostname k8s-master
# On node1
hostnamectl set-hostname k8s-node1
# On node2
hostnamectl set-hostname k8s-node2
Then reboot each machine.
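A side note on why the names matter: the kubelet registers each machine as a Node object under its hostname, which must be a valid DNS-1123 label (lowercase alphanumerics and hyphens). A quick sanity check, using an illustrative helper `valid_k8s_name`:

```shell
# The kubelet uses the hostname as the Node object's name, which must be a
# valid DNS-1123 label. valid_k8s_name is an illustrative helper.
valid_k8s_name() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

for h in k8s-master k8s-node1 k8s-node2; do
  valid_k8s_name "$h" && echo "$h: ok"
done
```

All three names used here pass; a name like `K8S_Master` would not.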
2 Base software setup (run on all three nodes)
2.1 Edit the hosts file
192.168.3.121 k8s-master
192.168.3.133 k8s-node1
192.168.3.119 k8s-node2
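These mappings can be appended in one go with a heredoc. The sketch below targets a temporary file so it is safe to run anywhere; on the real nodes, set `HOSTS=/etc/hosts` instead:

```shell
# Append the name mappings. HOSTS points at a temp file here for safety;
# on the real nodes use HOSTS=/etc/hosts instead.
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
192.168.3.121 k8s-master
192.168.3.133 k8s-node1
192.168.3.119 k8s-node2
EOF
grep k8s- "$HOSTS"
```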
2.2 Firewall
systemctl stop firewalld NetworkManager
systemctl disable firewalld NetworkManager
2.3 Disable SELinux and flush iptables
[root@localhost ~]# sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
[root@localhost ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl disable firewalld && systemctl stop firewalld
[root@localhost ~]# getenforce
Permissive
[root@localhost ~]# iptables -F
[root@localhost ~]# iptables -X
[root@localhost ~]# iptables -Z
[root@localhost ~]# iptables -P FORWARD ACCEPT
2.4 Disable swap
[root@localhost ~]# swapoff -a
[root@localhost ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
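The sed expression comments out every fstab line containing ` swap `. To preview the effect before touching the real file, the same expression can be dry-run against a throwaway sample (the file contents below are made up for illustration):

```shell
# Preview of the fstab edit on a throwaway sample (contents are illustrative).
FSTAB=$(mktemp)
printf '%s\n' '/dev/sda1 / ext4 defaults 0 0' '/dev/sda2 swap swap defaults 0 0' > "$FSTAB"
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$FSTAB"
cat "$FSTAB"   # the swap line is now commented out; the root fs line is untouched
```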
2.5 Configure yum repositories
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/*.repo
yum clean all && yum makecache fast
2.6 Time synchronization
Install the package first so that /etc/chrony.conf exists:
yum install chrony -y
Then edit the configuration file (vi /etc/chrony.conf) and add the following line:
server ntp.aliyun.com iburst
Finally start the service and write the time back to the hardware clock:
systemctl start chronyd
systemctl enable chronyd
date
hwclock -w
2.7 Network-related configuration
[root@localhost ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward=1
> vm.max_map_count=262144
> EOF
2.8 Tune kernel parameters to enable packet forwarding
# Cross-host container traffic goes through iptables, i.e. kernel-level packet forwarding
[root@localhost ~]# modprobe br_netfilter
# The next command loads the kernel parameter file written above
[root@localhost ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
2.9 Install Docker
yum remove docker docker-common docker-selinux docker-engine -y
curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum list docker-ce --showduplicates
yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 -y
Configure a registry mirror and switch the cgroup driver to systemd, as recommended by Kubernetes; otherwise kubeadm init will report an error:
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["http://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl start docker && systemctl enable docker
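A syntax error in daemon.json prevents dockerd from starting at all, so it is worth validating the file before restarting. One portable check, assuming python3 is available (shown against a temp copy so the sketch runs anywhere; on a real node point it at /etc/docker/daemon.json):

```shell
# Validate daemon.json before restarting docker; a malformed file keeps
# dockerd from starting. DAEMON is a temp copy here for illustration;
# on a real node use DAEMON=/etc/docker/daemon.json. Assumes python3 exists.
DAEMON=$(mktemp)
cat > "$DAEMON" <<'EOF'
{
  "registry-mirrors": ["http://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$DAEMON" > /dev/null && echo "daemon.json: valid JSON"
```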
2.10 Install the Kubernetes bootstrap tools
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache
yum list kubeadm --showduplicates
Install the specific versions:
yum install kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 ipvsadm
Then check the installed kubeadm version.
Enable kubelet on boot:
[root@localhost ~]# systemctl enable docker
[root@localhost ~]# systemctl enable kubelet
3 Initialize the master node (master node only)
kubeadm init \
  --apiserver-advertise-address=192.168.3.121 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.3 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.2.0.0/16 \
  --service-dns-domain=cluster.local \
  --ignore-preflight-errors=Swap \
  --ignore-preflight-errors=NumCPU
Parameter reference:
--apiserver-advertise-address=192.168.3.121  # IP of the master node
--service-cidr=10.1.0.0/16  # CIDR that Service (service discovery) addresses are allocated from
--pod-network-cidr=10.2.0.0/16  # CIDR that Pod addresses are allocated from
--service-dns-domain=cluster.local  # DNS suffix for Service resources
--ignore-preflight-errors=Swap  # ignore the swap preflight error
--ignore-preflight-errors=NumCPU  # ignore the CPU-count preflight error
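One pitfall worth checking before running init: the service CIDR and pod CIDR must not overlap with each other (nor with the node LAN 192.168.3.0/24). For the two /16 ranges above, a quick arithmetic check looks like this (`ip2int` is an illustrative helper):

```shell
# --service-cidr and --pod-network-cidr must not overlap.
# ip2int converts a dotted quad to an integer for mask arithmetic.
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24)+($2<<16)+($3<<8)+$4 )); }

svc_net=$(( $(ip2int 10.1.0.0) & 0xFFFF0000 ))   # network part of 10.1.0.0/16
pod_net=$(( $(ip2int 10.2.0.0) & 0xFFFF0000 ))   # network part of 10.2.0.0/16

if [ "$svc_net" -ne "$pod_net" ]; then
  echo "service and pod CIDRs do not overlap"
else
  echo "WARNING: CIDRs overlap" >&2
fi
```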
The command prints status information when it finishes.
Once it reports that the control-plane initialized successfully, run the commands it prints:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The master node is now configured.
4 Join the two worker nodes to the cluster
On both worker nodes, run the join command generated in the previous step:
[root@localhost ~]# kubeadm join 192.168.3.103:6443 --token 3fgatp.vubqh698q0cb67pb \
>     --discovery-token-ca-cert-hash sha256:1920df0215470b614b7faee4abbd79da2b00ec36a8865b9730d86530c25277a6
The worker machines can now communicate with the master, via the kubelet process.
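If the token has expired by the time a node joins (the default lifetime is 24 hours), a fresh join command can be printed on the master with `kubeadm token create --print-join-command`. The `--discovery-token-ca-cert-hash` value is simply the SHA-256 of the cluster CA's public key in DER form. The sketch below recomputes such a hash from a throwaway self-signed CA so it can run anywhere; on the real master, replace `"$CA/ca.crt"` with /etc/kubernetes/pki/ca.crt:

```shell
# --discovery-token-ca-cert-hash is the sha256 of the CA public key (DER).
# A throwaway self-signed CA is generated here so the sketch is self-contained;
# on the real master, use /etc/kubernetes/pki/ca.crt instead.
CA=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout "$CA/ca.key" -out "$CA/ca.crt" 2>/dev/null
HASH=$(openssl x509 -pubkey -noout -in "$CA/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```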
Once the join succeeds, go back to the master node and check the node list.
5 Deploy the container network (master node only)
wget https://docs.projectcalico.org/v3.20/manifests/calico.yaml --no-check-certificate
After downloading, edit the Pod network setting (CALICO_IPV4POOL_CIDR) inside the file so that it matches the --pod-network-cidr passed to kubeadm init earlier:
vi calico.yaml
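The change is a one-line substitution, so it can also be scripted. The sketch below runs the sed against a minimal excerpt of the manifest; on the real file the same expression applies to calico.yaml (note that in some Calico versions the CALICO_IPV4POOL_CIDR block ships commented out and must be uncommented as well):

```shell
# Switch CALICO_IPV4POOL_CIDR to the pod CIDR used at kubeadm init.
# Shown against a minimal excerpt; on the real file run the sed on calico.yaml.
YAML=$(mktemp)
cat > "$YAML" <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF
sed -i 's#192.168.0.0/16#10.2.0.0/16#' "$YAML"
grep value "$YAML"
```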
Save the file and apply it:
kubectl apply -f calico.yaml
Check the node status again.
Establishing the network takes a little while; after a few minutes the nodes transition to the Ready state.
With that, the Kubernetes cluster is up.
From: https://www.cnblogs.com/zhenjingcool/p/17413490.html