1. First, configure the hosts file
192.168.56.101 k8s-master-01   # master node
192.168.56.102 k8s-master-02   # master node
192.168.56.106 k8s-master-03   # master node
192.168.56.107 k8s-node-01     # worker node
192.168.56.108 k8s-node-02     # worker node
192.168.56.200 k8s-vip         # VIP
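If you want to apply this mapping in one step on each node, a minimal sketch that appends it to /etc/hosts (same addresses as above; adjust to your own environment):

cat >> /etc/hosts <<'EOF'
192.168.56.101 k8s-master-01
192.168.56.102 k8s-master-02
192.168.56.106 k8s-master-03
192.168.56.107 k8s-node-01
192.168.56.108 k8s-node-02
192.168.56.200 k8s-vip
EOF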
2. I am using CentOS 7 here. Install the yum repositories and the required tools.
Install the yum repositories:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Required tools:
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
3. On all nodes, disable firewalld, dnsmasq, and SELinux (CentOS 7 also needs NetworkManager disabled; CentOS 8 does not)
systemctl disable --now firewalld
systemctl disable --now dnsmasq          # errors here can be ignored
systemctl disable --now NetworkManager   # this step has pitfalls, proceed carefully; skip it if you don't need it
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
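A quick check afterwards (my addition, not part of the original steps) to confirm everything is really off:

systemctl is-enabled firewalld dnsmasq NetworkManager   # expect "disabled" (dnsmasq may not be installed at all)
getenforce                                              # expect "Permissive" now, "Disabled" after a reboot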
4. On all nodes, turn off the swap partition and comment out swap in fstab
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
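To verify the result (a hedged check, not from the original write-up):

free -h | grep -i swap   # Swap should show 0B total/used
grep swap /etc/fstab     # the swap entry should now be commented out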
5. Sync the time on all nodes
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
# Add to crontab:
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
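If you would rather not edit the crontab by hand, one hedged option for installing the entry non-interactively:

(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com") | crontab -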
6. Configure limits
ulimit -SHn 65535

vim /etc/security/limits.conf
# Append the following at the end:
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
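If you prefer not to open vim on every node, a minimal sketch that appends the same lines non-interactively:

cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF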
7. Set up passwordless SSH from the Master01 node to the other nodes. The configuration files and certificates generated during the installation are all created on Master01, and cluster management is also done from Master01. On Alibaba Cloud or AWS you need a separate kubectl server. The key setup is as follows:
ssh-keygen -t rsa
for i in k8s-master-01 k8s-master-02 k8s-master-03 k8s-node-01 k8s-node-02; do ssh-copy-id -i .ssh/id_rsa.pub $i; done
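A quick hedged check that the key distribution worked (BatchMode makes ssh fail instead of prompting for a password):

for i in k8s-master-01 k8s-master-02 k8s-master-03 k8s-node-01 k8s-node-02; do
    ssh -o BatchMode=yes $i hostname   # should print each hostname without asking for a password
done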
8. Install basic tools on all nodes
yum install wget jq psmisc vim net-tools yum-utils device-mapper-persistent-data lvm2 git -y
9. Upgrade the kernel. I am using 4.19 here.
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Install the kernel on all nodes:
cd /root && yum localinstall -y kernel-ml*
10. Set the upgraded kernel as the default boot entry and check it
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

grubby --default-kernel
11. Install ipvsadm on all nodes and configure the ipvs modules on all nodes. On kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels 4.18 and earlier, use nf_conntrack_ipv4.
# Load the modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Write the modules to a config file so they load automatically at boot
vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

# Enable and start the service
systemctl enable --now systemd-modules-load.service

# Check that the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack
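The heading calls for installing ipvsadm, but the snippet above only loads the modules. A minimal sketch that installs ipvsadm (the extra package names are my assumption, not from the original) and writes the module list without opening vim:

yum install ipvsadm ipset sysstat conntrack libseccomp -y   # ipvsadm is the one the step requires; the rest are assumed extras
cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl enable --now systemd-modules-load.service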
12. Enable kernel parameters. Turn on the kernel parameters required by a Kubernetes cluster; configure these on all nodes.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
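One note of my own, not in the original steps: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl --system complains about them, load the module first and then verify:

modprobe br_netfilter
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables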
13. Reboot all nodes and check that the kernel upgrade succeeded
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
uname -a
The base environment setup is now complete.
I am using VirtualBox, which I am not very used to, and I have not yet figured out how to run commands on all machines in bulk, so I wrote a small script:
#!/bin/bash
# Usage: <script> <master|node> "<command>"
#   master - run the command on the other master nodes
#   node   - run the command on all other nodes (masters and workers)

user=$1     # target group: master or node
action=$2   # command to run over ssh

master(){
    master_list=( 192.168.56.102 192.168.56.106 )
    for i in ${master_list[*]}
    do
        ssh root@$i "$action"
    done
}

node_master(){
    all_list=( 192.168.56.102 192.168.56.106 192.168.56.107 192.168.56.108 )
    for i in ${all_list[*]}
    do
        ssh root@$i "$action"
    done
}

main(){
    case $user in
        master)
            master
            ;;
        node)
            node_master
            ;;
        *)
            echo "invalid argument: expected master or node"
            ;;
    esac
}
main
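Assuming the script is saved as batch.sh (the file name is mine, not from the original), usage looks like this:

chmod +x batch.sh
./batch.sh master "uname -r"   # run a command on the other master nodes
./batch.sh node "free -h"      # run a command on every other node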