Kubernetes Installation
Preface: Kubernetes (k8s) is an open-source system for managing container clusters across multiple hosts. It orchestrates the containers running on those hosts and automates the deployment, scaling, and management of containerized applications.
Node layout:
192.168.104.96   Master
192.168.104.97   Node1
192.168.104.98   Node2
1. Set the hostnames
Run the matching command on each machine:
[root@localhost ~]# hostnamectl set-hostname master   # on 192.168.104.96
[root@localhost ~]# bash
[root@localhost ~]# hostnamectl set-hostname node1    # on 192.168.104.97
[root@localhost ~]# bash
[root@localhost ~]# hostnamectl set-hostname node2    # on 192.168.104.98
[root@localhost ~]# bash
2. Edit the hosts file (/etc/hosts) on all nodes
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.104.96 master
192.168.104.97 node1
192.168.104.98 node2
3. Configure passwordless SSH login (master node)
(1) Generate an SSH key pair
ssh-keygen
(2) View the public key
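The original does not show the command for this step; assuming the default key path used by ssh-keygen (it matches the path in the ssh-copy-id log below), the public key can be displayed with:
[root@master ~]# cat /root/.ssh/id_rsa.pub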
(3) Copy the public key to the target hosts
[root@master ~]# ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.104.97 (192.168.104.97)' can't be established.
ECDSA key fingerprint is SHA256:zjxqiFWtjnGxqDD3cia4JIswmxh2h3wgJOTPH/YaM58.
ECDSA key fingerprint is MD5:e4:46:b9:97:24:2b:93:a9:ec:a1:49:af:61:0b:89:7a.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:
Permission denied, please try again.
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.104.97'"
and check to make sure that only the key(s) you wanted were added.

# Test the connection
[root@master ~]# ssh 192.168.104.97
Last login: Mon Sep 23 08:28:38 2024 from 192.168.110.1
[root@node1 ~]# exit
logout
Connection to 192.168.104.97 closed.

# Repeat for the remaining hosts
[root@master ~]# ssh-copy-id master
[root@master ~]# ssh-copy-id node2
4. Permanently disable the firewall and SELinux (run on all nodes)
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# sed -i 's/enforcing/disabled/g' /etc/selinux/config
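The sed edit only takes effect after a reboot; to also drop SELinux to permissive mode immediately in the current session (a common companion step, not shown in the original):
# Switch SELinux to permissive for the running session
[root@master ~]# setenforce 0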
5. Disable the swap partition (run on all nodes)
The kubelet requires swap to be disabled, so kubeadm checks whether swap is off during initialization.
[root@master ~]# swapoff -a    # temporarily disable swap
[root@master ~]# echo vm.swappiness = 0 >> /etc/sysctl.conf
[root@master ~]# sysctl -p
vm.swappiness = 0
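Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, the swap entry in /etc/fstab is usually commented out as well; a sketch assuming the default CentOS fstab layout (double-check the file afterwards):
# Comment out every fstab line mentioning swap so it stays off after reboot
[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab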
6. Adjust kernel parameters (run on all nodes)
To pass bridged IPv4 traffic to iptables chains and enable IPv4 forwarding, the br_netfilter module must be loaded.
[root@master ~]# modprobe br_netfilter
[root@master ~]# echo "modprobe br_netfilter" >> /etc/profile
Create the file /etc/sysctl.d/k8s.conf:
[root@master ~]# tee /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
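Since the step above also mentions enabling IPv4 forwarding, many setups add net.ipv4.ip_forward to the same file (an addition not present in the original):
# Enable IPv4 forwarding alongside the bridge settings
[root@master ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf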
Reload the configuration:
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Check that the bridge filter module is loaded
[root@master ~]# lsmod | grep br_netfilter
7. Configure cluster time synchronization (run on all nodes)
[root@master ~]# yum install -y ntp ntpdate
[root@master ~]# ntpdate cn.pool.ntp.org
23 Sep 09:17:39 ntpdate[2704]: adjust time server 193.182.111.14 offset 0.000844 sec
[root@master ~]# systemctl start ntpd
[root@master ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
8. Configure IPVS support
[root@master ~]# yum install -y ipvsadm
[root@master ~]# vi /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
Make the script executable:
[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
Run the script:
[root@master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
Check whether the modules loaded successfully:
[root@master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
9. Install Docker
[root@master ~]# yum install -y yum-utils \
> device-mapper-persistent-data \
> lvm2
[root@master ~]# yum-config-manager \
--add-repo \
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum install docker-ce-20.10.9-3.el7 docker-ce-cli-20.10.9-3.el7 docker-compose-plugin containerd.io -y
Start Docker and enable it at boot:
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Verify the Docker installation.
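The verification command itself is not shown in the original; a typical check is:
# Confirm the installed version and that the daemon is running
[root@master ~]# docker --version
[root@master ~]# systemctl status docker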
Configure the Alibaba Cloud registry mirror accelerator:
[root@master ~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://do.nark.eu.org",
    "https://dc.j8.work",
    "https://docker.m.daocloud.io",
    "https://dockerproxy.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://docker.nju.edu.cn"
  ]
}
EOF
Reload systemd and restart Docker:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
10. Install Kubernetes
(1) Configure the yum repository
[root@master ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
(2) Install the three components: kubeadm, kubectl, kubelet (run on all nodes)
kubeadm: the command that bootstraps (initializes) the k8s cluster.
kubelet: runs on every node in the cluster; it starts pods and containers.
kubectl: the command-line tool for talking to the k8s cluster; used to view, create, update, and delete resources.
[root@master ~]# yum install -y kubelet-1.17.4 kubeadm-1.17.4 kubectl-1.17.4
Add the following configuration to /etc/sysconfig/kubelet:
[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
Enable kubelet at boot on all nodes:
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
(3) Add a hosts entry for the master node (on all nodes)
[root@master ~]# echo "192.168.104.96 cluster-endpoint" >> /etc/hosts
(4) Prepare the Kubernetes images
[root@master ~]# kubeadm config images list
Download the images: the Kubernetes images are hosted on k8s.gcr.io, which is unreachable from this network, so take a different route: pull the required images from the Aliyun mirror registry, retag them with the default k8s.gcr.io names that Kubernetes expects (working around the network restriction), and finally delete the temporary Aliyun-tagged images, as sketched below.
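A minimal sketch of that pull-retag-delete loop (registry.aliyuncs.com/google_containers is the commonly used Aliyun mirror path; verify the names against your own kubeadm config images list output before running):
#!/bin/bash
# Pull each image kubeadm needs from the Aliyun mirror, retag it with
# the k8s.gcr.io name kubeadm expects, then drop the temporary mirror tag.
images=$(kubeadm config images list | awk -F'/' '{print $NF}')
for img in $images; do
    docker pull registry.aliyuncs.com/google_containers/$img
    docker tag  registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    docker rmi  registry.aliyuncs.com/google_containers/$img
done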
(5) Initialize the k8s cluster (master node)
[root@master ~]# kubeadm init \
--kubernetes-version=v1.17.4 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.104.96
kubeadm init: initializes the Kubernetes control-plane node (the master). It installs and configures the basic cluster components such as the API Server, Controller Manager, and Scheduler.
--kubernetes-version=v1.17.4: the Kubernetes version to install, here v1.17.4. If no version is given, kubeadm installs the default latest stable release.
--pod-network-cidr=10.244.0.0/16: the IP address range (CIDR) allocated to Pods in the cluster. 10.244.0.0/16 is the default range of the Flannel network plugin; other plugins (such as Calico) may need a different CIDR.
--service-cidr=10.96.0.0/12: the IP address range allocated to Kubernetes Services. Service IPs are virtual IPs for services rather than Pod IPs; this CIDR defines the address range of all Services in the cluster.
--apiserver-advertise-address=192.168.104.96: the IP address the Kubernetes API Server advertises. This is the master node's address, which worker nodes use to reach the API Server when joining the cluster.
Wait for initialization to finish; when the output contains "Your Kubernetes control-plane has initialized successfully!", the initialization succeeded.
(6) Create the required kubeconfig files:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
(7) Join the worker nodes (run on all node machines)
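The original omits the actual join command; kubeadm init prints it at the end of its output. It has the general form below, where the token and hash are placeholders to be replaced with the values from your own init output (they can also be regenerated on the master with kubeadm token create --print-join-command):
# Run on each worker node; <token> and <hash> come from kubeadm init
[root@node1 ~]# kubeadm join 192.168.104.96:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>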
After the nodes have joined the cluster successfully, verify from the master:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   2m24s   v1.17.4
node1    NotReady   <none>   81s     v1.17.4
node2    NotReady   <none>   75s     v1.17.4
(8) Install the Calico network plugin (run on all nodes)
[root@master ~]# curl https://docs.projectcalico.org/archive/v3.17/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  183k  100  183k    0     0  56943      0  0:00:03  0:00:03 --:--:-- 56957
List all the images Calico requires:
[root@master ~]# grep image calico.yaml
          image: docker.io/calico/cni:v3.17.6
          image: docker.io/calico/cni:v3.17.6
          image: docker.io/calico/pod2daemon-flexvol:v3.17.6
          image: docker.io/calico/node:v3.17.6
          image: docker.io/calico/kube-controllers:v3.17.6
Pull them with docker pull (on all nodes), for example with the one-liner sketched below.
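A compact way to pull every image the manifest references (deduplicated from the grep output above; an illustrative command, not from the original):
# Extract the image names from calico.yaml and pull each one
[root@master ~]# grep image calico.yaml | awk '{print $2}' | sort -u | xargs -n1 docker pull
One more detail worth checking: the v3.17 calico.yaml ships with CALICO_IPV4POOL_CIDR commented out and defaulting to 192.168.0.0/16, while this cluster was initialized with --pod-network-cidr=10.244.0.0/16. Many guides therefore uncomment that variable in calico.yaml and set it to match before applying:
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"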
(9) Apply the calico.yaml file on the master node
[root@master ~]# kubectl apply -f calico.yaml
(10) Check the node status; once the Calico pods are up, all nodes should move from NotReady to Ready
[root@master ~]# kubectl get nodes
11. Test the cluster:
(1) Deploy nginx for an access test
[root@master ~]# kubectl create deployment nginx --image=nginx:1.14-alpine
deployment.apps/nginx created
(2) Expose the port
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
(3) Check the service status (wait a moment for the pod to start)
[root@master ~]# kubectl get pods,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-6867cdf567-qmxt5   0/1     ContainerCreating   0          16s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        36m
service/nginx        NodePort    10.104.174.221   <none>        80:32038/TCP   8s
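Once the pod reaches Running, nginx should answer on any node IP at the NodePort shown above (32038 in this run; yours will differ):
# Fetch the nginx welcome page through the NodePort
[root@master ~]# curl http://192.168.104.96:32038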