
kubernetes+docker+kubeadm Quick Installation


1. Kubernetes 1.27 Release

Kubernetes 1.27 was officially released on April 13, 2023, the first release of 2023. It includes 60 enhancements: 18 entered Alpha, 29 entered Beta, and 13 graduated to Stable.

2. Environment Preparation

2.1 Host Operating System

Operating System and Version  Notes
CentOS 7.9

2.2 Host Configuration

CPU  Memory  Disk  Role           Hostname
4C   4G      50G   master         k8s-master
4C   4G      50G   worker (node)  k8s-node1
4C   4G      50G   worker (node)  k8s-node2

2.3 Hostname Configuration

This deployment uses three hosts: one master node named k8s-master, and two worker nodes named k8s-node1 and k8s-node2.

# master node
hostnamectl set-hostname k8s-master
# worker01 node
hostnamectl set-hostname k8s-node1
# worker02 node
hostnamectl set-hostname k8s-node2

2.4 Hostname and IP Address Resolution

Configure on all cluster hosts:

cat >> /etc/hosts << EOF
10.50.88.214 k8s-master
10.50.88.215 k8s-node1
10.50.88.216 k8s-node2
EOF

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.50.88.214 k8s-master
10.50.88.215 k8s-node1
10.50.88.216 k8s-node2
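
As an optional sanity check (not in the original steps), confirm that each hostname resolves and responds from every machine:

# One ping per host; all three should get a reply
for h in k8s-master k8s-node1 k8s-node2; do ping -c 1 -W 1 $h; done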

2.5 Disable the Firewall

Required on all hosts.

Disable the running firewalld service:
# systemctl disable firewalld
# systemctl stop firewalld
# firewall-cmd --state
not running

2.6 SELinux Configuration

Required on all hosts.

# setenforce 0
# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

2.7 Time Synchronization

Required on all hosts. Use ntpdate or chrony to keep time in sync.

# yum -y install ntpdate
# add an hourly sync job to root's crontab (crontab -e), then verify:
# crontab -l
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
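
The guide shows the ntpdate cron approach; chrony is the other option mentioned above and keeps the clock continuously synchronized. A minimal sketch, pointing it at the same Aliyun NTP server:

# Alternative: chrony instead of an ntpdate cron job
yum -y install chrony
# comment out the default pool servers and add the Aliyun NTP server
sed -i 's/^server /#server /' /etc/chrony.conf
echo "server time1.aliyun.com iburst" >> /etc/chrony.conf
systemctl enable --now chronyd
# verify the time source is being tracked
chronyc sources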

2.8 Upgrade the OS Kernel

Required on all hosts.

# Import the elrepo GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the elrepo YUM repository
yum -y install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# Install the kernel-ml package: ml is the mainline (latest) kernel, lt is the long-term support kernel
yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64
# Set the grub2 default boot entry to 0 (the newly installed kernel)
grub2-set-default 0
# Regenerate the grub2 configuration
grub2-mkconfig -o /boot/grub2/grub.cfg
# Reboot so the upgraded kernel takes effect
reboot
# After the reboot, verify that the running kernel matches the installed version
uname -r
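
grub2-set-default 0 assumes the freshly installed kernel is the first menu entry. An optional check that lists the entries with their indexes:

# List grub2 menu entries; the new kernel-ml should be index 0
awk -F\' '/^menuentry / {print i++ " : " $2}' /boot/grub2/grub.cfg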

2.9 Configure Kernel Forwarding and Bridge Filtering

Required on all hosts.

# Create the bridge filtering and kernel forwarding configuration file
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

# Load the br_netfilter module
modprobe br_netfilter

# Check that it is loaded
lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter

# Apply the bridge filtering and kernel forwarding settings
sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
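
Note that modprobe does not persist across reboots. To have br_netfilter loaded automatically at boot, you can add a systemd modules-load entry (a small addition beyond the original steps):

# Load br_netfilter automatically on every boot
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF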

2.10 Install ipset and ipvsadm

Required on all hosts.

# Install ipset and ipvsadm
yum -y install ipset ipvsadm

# Configure IPVS kernel module loading
# Add the modules that need to be loaded:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF


# Make the script executable, run it, and check that the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
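
Installing ipset and ipvsadm does not by itself switch kube-proxy from its default iptables mode to IPVS. If you want IPVS once the cluster is up, one way (a sketch against the ConfigMap kubeadm creates) is:

# After cluster init: set mode: "ipvs" in the kube-proxy ConfigMap ...
kubectl -n kube-system edit configmap kube-proxy
# ... then recreate the kube-proxy pods so they pick up the change
kubectl -n kube-system delete pod -l k8s-app=kube-proxy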

2.11 Disable the Swap Partition

After the change, reboot the operating system; without a reboot, swap can be turned off temporarily with swapoff -a.

# Permanently disable the swap partition (requires a reboot)
swapoff -a
sed -i 's/.*swap.*/#&/g' /etc/fstab
cat /etc/fstab
......

# /dev/mapper/centos-swap swap                    swap    defaults        0 0

The sed command above comments out the swap entry by prepending # to the line.
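
An optional check that no swap remains active:

# Both should report no active swap
swapon -s
free -m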

3. Docker Preparation

3.1 Prepare the Docker YUM Repository

Use the Alibaba Cloud open-source mirror site:

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

3.2 Install Docker

yum -y install docker-ce

3.3 Start the Docker Service

systemctl enable --now docker

3.4 Change the cgroup Driver

/etc/docker/daemon.json does not exist by default; create it:

# Add the following content to /etc/docker/daemon.json
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://84bkfzte.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://84bkfzte.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}


systemctl daemon-reload
systemctl restart docker
docker info
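
docker info should now report systemd as the cgroup driver; a quick filtered check:

# Expect: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"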

3.5 Install cri-dockerd

Required on all hosts.

# Download the cri-dockerd package (fetching it from GitHub may require a proxy)
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1-3.el7.x86_64.rpm

# Install cri-dockerd
rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm

# Point the pause image at a domestic registry, otherwise kubelet cannot pull it and fails to start.
# Edit the ExecStart line in the [Service] section:
vi /usr/lib/systemd/system/cri-docker.service

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7

# Start cri-dockerd
systemctl daemon-reload 
systemctl enable cri-docker && systemctl start cri-docker
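
cri-dockerd should now be active and listening on its CRI socket (an optional check):

# Verify the service is running and the socket used later by kubeadm exists
systemctl is-active cri-docker
ls -l /var/run/cri-dockerd.sock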

4. Kubernetes 1.27.0 Cluster Deployment

4.1 Cluster Software and Versions

Component  Version  Install location   Purpose
kubeadm    1.27.0   all cluster hosts  initializes and manages the cluster
kubelet    1.27.0   all cluster hosts  receives instructions from the api-server and manages the pod lifecycle
kubectl    1.27.0   all cluster hosts  command-line tool for managing cluster applications

4.2 Prepare the Kubernetes YUM Repository

Install on all nodes:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

 

4.3 Install the Cluster Software

# Install
yum install -y kubelet-1.27.0 kubeadm-1.27.0 kubectl-1.27.0
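
An optional check that the pinned versions landed:

# All three should report v1.27.0
kubeadm version -o short
kubelet --version
kubectl version --client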

4.4 Configure kubelet

To keep the cgroup driver used by kubelet consistent with the one used by Docker, modify the following file:

vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"


# Just enable kubelet to start at boot; its config file has not been generated yet, so it will start automatically after cluster initialization
systemctl enable kubelet && systemctl restart kubelet

4.5 Cluster Initialization

Run on the master node.

Note: change apiserver-advertise-address to the master's actual IP. pod-network-cidr can be left as shown; with kubeadm, the Calico manifest installed in section 5 detects the pod CIDR automatically:
[root@k8s-master ~]# kubeadm init \
  --apiserver-advertise-address=10.50.88.214 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.27.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --ignore-preflight-errors=all

Initialization output:

[init] Using Kubernetes version: v1.27.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0415 17:50:39.742407    3689 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd version (3.5.7-0)
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.17.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.17.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.17.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0415 17:51:04.317762    3689 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002359 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 6t01k9.671ufvohi6l6fu7g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.50.88.214:6443 --token 6t01k9.671ufvohi6l6fu7g \
        --discovery-token-ca-cert-hash sha256:56d66ba010a67f0668f301984204f8e3f0c189bd4cba9ff20ce2289aabf24259

4.6 Prepare the Client Configuration File for Managing the Cluster

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# ls /root/.kube/
config
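
With the kubeconfig in place, kubectl can reach the API server. At this point the master may still report NotReady, since the pod network is only deployed in section 5:

# The control-plane node appears, but stays NotReady until a pod network is installed
kubectl get nodes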

4.7 Add Worker Nodes

Run on all worker nodes.

# Note: --cri-socket=unix:///var/run/cri-dockerd.sock must be appended at the end
kubeadm join 10.50.88.214:6443 --token wuo7ap.m69mh7ovnixosuy5 \
    --discovery-token-ca-cert-hash sha256:0f334c5529a29d65de5579451d28b7ea65489cf23054828edaf3cd402a42a276 --cri-socket=unix:///var/run/cri-dockerd.sock
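
The bootstrap token printed by kubeadm init expires after 24 hours. If it has expired or the join command was lost, a fresh one can be generated on the master:

# Run on the master to print a new join command with a fresh token
kubeadm token create --print-join-command
# remember to append --cri-socket=unix:///var/run/cri-dockerd.sock when running it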

5. Deploy the Container Network

Run on the master node.

5.1 Prepare the Calico Installation

# Download the Calico manifest (fetching it may require a proxy)
wget https://projectcalico.docs.tigera.io/archive/v3.25/manifests/calico.yaml
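
The v3.25 manifest ships with the CALICO_IPV4POOL_CIDR environment variable commented out, and with kubeadm Calico then detects the pod CIDR automatically. If you deviate from this guide's --pod-network-cidr, you can pin the pool explicitly by uncommenting it in calico.yaml, for example:

# Optional: in calico.yaml, uncomment and set the pool to match --pod-network-cidr
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"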

5.2 Install Calico

# Apply the manifest to create Calico
kubectl apply -f calico.yaml

# Check the coredns pods in the kube-system namespace; Running status indicates the pod network is working.
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6c99c8747f-rvzds   1/1     Running   0          4m13s
calico-node-f7b9l                          1/1     Running   0          4m13s
coredns-7bdc4cb885-8z2fz                   1/1     Running   0          18m
coredns-7bdc4cb885-gmpd7                   1/1     Running   0          18m
etcd-k8s-master                            1/1     Running   0          19m
kube-apiserver-k8s-master                  1/1     Running   0          19m
kube-controller-manager-k8s-master         1/1     Running   0          19m
kube-proxy-hs5sg                           1/1     Running   0          18m
kube-scheduler-k8s-master                  1/1     Running   0          19m

6. Verify Cluster Availability

# List all nodes
[root@k8s-master ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master   Ready    control-plane   4h25m   v1.27.0   10.50.88.214   <none>        CentOS Linux 7 (Core)   6.5.9-1.el7.elrepo.x86_64   docker://24.0.6
k8s-node1    Ready    <none>          4h11m   v1.27.0   10.50.88.215   <none>        CentOS Linux 7 (Core)   6.5.9-1.el7.elrepo.x86_64   docker://24.0.6
k8s-node2    Ready    <none>          4h11m   v1.27.0   10.50.88.216   <none>        CentOS Linux 7 (Core)   6.5.9-1.el7.elrepo.x86_64   docker://24.0.6

# Check cluster component health

[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}
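
As a final smoke test (not part of the original write-up), you can deploy a throwaway nginx and confirm it is scheduled onto a worker and reachable:

# Deploy a test nginx and expose it on a NodePort
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
# wait for the pod, then hit the NodePort on any node (k8s-node1 here)
kubectl get pods -l app=nginx-test -o wide
curl http://10.50.88.215:$(kubectl get svc nginx-test -o jsonpath='{.spec.ports[0].nodePort}')
# clean up
kubectl delete svc,deployment nginx-test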

 

From: https://www.cnblogs.com/xiaowenyiyi/p/17790429.html
