Many test environments have no Internet access, which makes installing k8s a bit more troublesome. This post shows how to install k8s in an environment without external network access.
To install in an offline environment, you need an extra machine that can reach the Internet and can transfer files to the offline machine; it is referred to below as the online machine.
Installing k8s roughly breaks down into two steps: installing the binaries (kubectl, kubeadm, kubelet) and preparing the container images (kube-apiserver, kube-controller-manager, etc.).
Prepare the rpm packages. To avoid package dependency problems, the online machine must have the same environment as the offline machine, i.e. the same OS version; installing a virtual machine on the online machine is one way to match the OS version.
First, install docker in the offline environment. This step is fairly easy and covered by plenty of other articles, so it is skipped here.
Assuming the online environment is ready, first download the kubectl, kubeadm and kubelet rpm packages. Since the default yum repository is hosted overseas, switch to a domestic mirror:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
After running yum update, download the rpm packages:
yum install -y kubelet kubeadm kubectl --downloadonly --downloaddir rpm/
This downloads the rpm packages into the rpm/ directory without installing them. By default the latest stable version is downloaded, but you can also pin a specific version.
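For example, to match the v1.28.8 images used later in this post, pinning the versions might look like the following sketch (the exact version strings offered by the mirror are an assumption; check with yum --showduplicates list kubelet first):

# hypothetical example: pin the packages to 1.28.8 (verify the exact version string first)
yum install -y kubelet-1.28.8 kubeadm-1.28.8 kubectl-1.28.8 --downloadonly --downloaddir rpm/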
Transfer the packages under rpm/ to the offline machine and install them with the command below.
# assuming the packages are under rpm/
rpm -ivh rpm/*
The installation may fail on dependency problems, and the missing dependencies still have to be downloaded via the online machine. Suppose socat is such a dependency; go back to the online machine:
yum remove -y socat && yum install -y socat --downloadonly --downloaddir rpm/
Transfer the socat rpm package to the offline machine and continue the installation; if another missing dependency appears, repeat the process.
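If you want to cut down on these round trips, the yum-utils package provides yumdownloader, which can resolve and download a package's dependencies in one pass; a sketch, assuming yum-utils is installable on the online machine:

# on the online machine: download socat together with whatever it depends on
yum install -y yum-utils
yumdownloader --resolve --destdir rpm/ socat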
During initialization k8s downloads a set of container images, and their versions must match the binaries installed above. The following method finds all the container images that need to be downloaded.
Back in the online environment, run the following command:
kubeadm init \
  --kubernetes-version 1.28.8 \
  --apiserver-advertise-address=172.17.0.22 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.245.0.0/16 \
  --image-repository registry.aliyuncs.com/google_containers --v=5
Change the IP in the --apiserver-advertise-address line to this machine's IP. Running the command above prints the container images kubeadm pulls during initialization, similar to:
I0408 10:41:19.264377    8355 checks.go:854] pulling: registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.8
I0408 10:41:59.144378    8355 checks.go:854] pulling: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.8
I0408 10:42:37.276040    8355 checks.go:854] pulling: registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.8
I0408 10:43:14.380409    8355 checks.go:854] pulling: registry.aliyuncs.com/google_containers/kube-proxy:v1.28.8
W0408 10:44:02.182235    8355 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is
I0408 10:44:02.214810    8355 checks.go:854] pulling: registry.aliyuncs.com/google_containers/pause:3.9
I0408 10:44:37.559221    8355 checks.go:854] pulling: registry.aliyuncs.com/google_containers/etcd:3.5.9-0
I0408 10:45:25.609190    8355 checks.go:854] pulling: registry.aliyuncs.com/google_containers/coredns:v1.10.1
This gives us all the container image names and versions.
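As a side note, kubeadm can print the same image list without actually running an init; a sketch of that approach:

kubeadm config images list \
  --kubernetes-version 1.28.8 \
  --image-repository registry.aliyuncs.com/google_containers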
Pull the images, for example kube-apiserver:v1.28.8:
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.8
Save the image as a tar archive:
docker save registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.8 -o kube-apiserver.tar
After saving all the images, transfer them to the offline environment.
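Pulling and saving all the images can be scripted; a minimal sketch, with the image list copied from the kubeadm output above (adjust the versions to match your own output):

#!/bin/bash
# on the online machine: pull every required image and save it as a tar archive
REPO=registry.aliyuncs.com/google_containers
IMAGES="kube-apiserver:v1.28.8 kube-controller-manager:v1.28.8 kube-scheduler:v1.28.8 \
        kube-proxy:v1.28.8 pause:3.9 etcd:3.5.9-0 coredns:v1.10.1"
for img in $IMAGES; do
    docker pull $REPO/$img
    # e.g. kube-apiserver:v1.28.8 -> kube-apiserver.tar
    docker save $REPO/$img -o ${img%%:*}.tar
done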
Set up a local registry in the offline environment.
Since the offline environment cannot reach any external image registry, we need to set up a local one. Using the registry image is the most convenient way. Prepare the registry image package with the method above and transfer it to the offline environment.
# load the registry image so it can be used locally
docker load -i registry.tar
mkdir registry
docker run -d -p 5000:5000 --restart=always --name registry -v $(pwd)/registry:/var/lib/registry registry:latest
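To confirm the registry is reachable, you can query its standard /v2/ API on port 5000:

curl http://localhost:5000/v2/_catalog
# before anything is pushed this returns {"repositories":[]}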
Push the k8s container images to the local registry.
Load all the container images:
docker load -i kube-apiserver.tar ...
Tag the container images:
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.8 $(hostname):5000/kube-apiserver:v1.28.8
Push the images to the local registry:
docker push $(hostname):5000/kube-apiserver:v1.28.8
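Loading, retagging and pushing every image can also be done in a loop; a sketch, assuming the tar archives produced earlier sit in the current directory and the images are pushed under localhost:5000 to match the kubeadm command below:

#!/bin/bash
# on the offline machine: load, retag and push every image into the local registry
REPO=registry.aliyuncs.com/google_containers
IMAGES="kube-apiserver:v1.28.8 kube-controller-manager:v1.28.8 kube-scheduler:v1.28.8 \
        kube-proxy:v1.28.8 pause:3.9 etcd:3.5.9-0 coredns:v1.10.1"
for img in $IMAGES; do
    docker load -i ${img%%:*}.tar
    docker tag $REPO/$img localhost:5000/$img
    docker push localhost:5000/$img
done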
Now we can try to initialize k8s:
kubeadm init \
  --kubernetes-version 1.28.8 \
  --apiserver-advertise-address=10.65.42.125 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.245.0.0/16 \
  --image-repository localhost:5000 --v=5
Change the --apiserver-advertise-address option to this machine's IP. You may run into a missing containerd.sock error; the fix is to copy a working containerd.service from another machine and restart containerd.
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
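Assuming the copied unit file is placed at /etc/systemd/system/containerd.service, reloading systemd and restarting containerd would look roughly like:

systemctl daemon-reload
systemctl enable --now containerd
# verify the runtime is up and the socket kubeadm is looking for exists
systemctl status containerd
ls /run/containerd/containerd.sock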
You may also need to create containerd's config yourself: run mkdir /etc/containerd && containerd config default > /etc/containerd/config.toml, then change the sandbox_image entry to
sandbox_image = "localhost:5000/pause:3.9"
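One way to script the edit and then restart containerd so the change takes effect (a sketch; you can also edit config.toml by hand):

sed -i 's#sandbox_image = .*#sandbox_image = "localhost:5000/pause:3.9"#' /etc/containerd/config.toml
systemctl restart containerd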
After initialization succeeds, the node may still not be Ready. Install a CNI.
The CNI binaries are usually already installed at this point, so only a CNI config needs to be added:
cat << EOF | tee /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{ "subnet": "10.88.0.0/16" }],
          [{ "subnet": "2001:db8:4860::/64" }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "externalSetMarkChain": "KUBE-MARK-MASQ"
    }
  ]
}
EOF
Check the node status again now; it should be Ready.
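A quick check, assuming kubectl uses the admin kubeconfig generated by kubeadm init:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes   # the node should report STATUS Ready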