1. Prerequisites
- Docker Desktop + WSL installed on Windows.
- The dependency packages for Kylin V10 Kubernetes system initialization already downloaded (if not, see the previous article: download them on the Kunpeng/Kylin server, or download them manually on Windows).
2. Building the offline package
2.1 Create the directory
Go to E:\KubeSphere, open a terminal (cmd) there, type wsl to enter the subsystem, and create an arm directory.
2.2 Download kk
- Option 1
root@DESKTOP-BB0KRFQ:/mnt/e/KubeSphere/arm# export KKZONE=cn
root@DESKTOP-BB0KRFQ:/mnt/e/KubeSphere/arm# curl -sfL https://get-kk.kubesphere.io | VERSION=v3.1.5 sh -
Downloading kubekey v3.1.5 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.1.5/kubekey-v3.1.5-linux-amd64.tar.gz ...
Kubekey v3.1.5 Download Complete!
root@DESKTOP-BB0KRFQ:/mnt/e/KubeSphere/arm# ls
kk kubekey-v3.1.5-linux-amd64.tar.gz
- Option 2
Use your local machine to download directly from the GitHub releases page: Releases · kubesphere/kubekey
Upload the archive to the server's /root/kubesphere directory and extract it:
tar zxf kubekey-v3.1.5-linux-amd64.tar.gz
On local Windows the amd64 build of kk is used, but the actual deployment targets arm64, so kubekey-v3.1.5-linux-arm64.tar.gz must also be downloaded manually.
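Since the release URLs differ only by architecture, a small helper (illustrative only; the function name is not part of kk) avoids typos when fetching both archives:

```shell
KK_VERSION=v3.1.5
# compose the GitHub release download URL for a given architecture (amd64 / arm64)
kk_url() {
  echo "https://github.com/kubesphere/kubekey/releases/download/${KK_VERSION}/kubekey-${KK_VERSION}-linux-$1.tar.gz"
}
# both builds are needed:
#   curl -LO "$(kk_url amd64)"   # kk binary used locally on Windows/WSL
#   curl -LO "$(kk_url arm64)"   # archive shipped to the arm64 servers
kk_url arm64
```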
2.3 Edit the manifest file
Generating the artifact from the official sample manifest produced assorted image errors, so no images are downloaded here (older kk versions required at least one image); images are handled by shell scripts instead. The OS iso is also skipped, and the dependency package built in step one is used instead.
Advantages:
- smaller artifact
- more flexible image changes
- components added/removed on demand
Disadvantages:
- extra scripts to write
- an extra step in the offline deployment
vim manifest-kylin.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - arm64
  operatingSystems:
  - arch: arm64
    type: linux
    id: kylin
    version: "V10"
    osImage: Kylin Linux Advanced Server V10
    repository:
      iso:
        localPath:
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.25.16
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.3
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
Note: docker-registry is used as the registry here. If you need Harbor, deploy with this method first and install Harbor separately afterwards, or refer to the earlier article on installing Harbor directly: offline deployment of kubesphere on Kunpeng (arm64) + Kylin (kylin v10) (including a new offline deployment method).
2.4 Export the offline artifact
export KKZONE=cn
./kk artifact export -m manifest-kylin.yaml -o ks3.4-artifact.tar.gz
Export complete.
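Before moving on, a quick integrity check on the exported artifact is cheap insurance (a sketch; the filename matches the export command above):

```shell
# a tarball that lists cleanly is almost certainly intact
check_artifact() {
  if tar -tzf "$1" > /dev/null 2>&1; then
    echo "artifact OK: $1"
  else
    echo "artifact missing or corrupt: $1"
  fi
}
check_artifact ks3.4-artifact.tar.gz
```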
2.5 Manually pull the k8s-related images
vim pull-images.sh
#!/bin/bash
# versions that changed for k8s 1.25
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.25.16
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.25.16
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.25.16
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.25.16
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.8
# ks3.4.1 and unchanged versions
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0
# opensearch is saved in step 2.7 and enabled in the cluster configuration, so pull it here too
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
docker pull --platform=linux/arm64 registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1
Check that the downloads succeeded.
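One way to check is to extract the image list back out of the script and ask docker about each entry (a sketch; run it next to pull-images.sh, the helper name is made up here):

```shell
# list the image references that pull-images.sh is expected to fetch
list_pull_images() {
  grep '^docker pull' "$1" | awk '{print $NF}'
}
# on the build machine, report anything that failed to pull:
#   list_pull_images pull-images.sh | while read -r img; do
#     docker image inspect "$img" > /dev/null 2>&1 || echo "MISSING: $img"
#   done
```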
2.6 Retag the images
vim tag-images.sh
Adjust the registry address and project name in the scripts below to match your own docker-registry/harbor setup.
#!/bin/bash
# versions that changed for k8s 1.25
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.25.16 kube-apiserver:v1.25.16
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.25.16 kube-controller-manager:v1.25.16
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.25.16 kube-scheduler:v1.25.16
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.25.16 kube-proxy:v1.25.16
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3 coredns:1.9.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.8 pause:3.8
# ks3.4.1 and unchanged versions
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3 kube-controllers:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3 cni:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3 pod2daemon-flexvol:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3 node:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine haproxy:2.9.6-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1 ks-installer:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1 ks-console:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1 ks-controller-manager:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1 ks-apiserver:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0 notification-manager:v2.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0 notification-manager-operator:v2.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0 thanos:v0.31.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0 opensearch:2.6.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 k8s-dns-node-cache:1.22.20
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1 prometheus:v2.39.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0 kube-state-metrics:v2.6.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0 linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1 prometheus-config-reloader:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1 prometheus-operator:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1 node-exporter:v1.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0 kubectl:v1.22.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0 notification-tenant-sidecar:v3.2.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0 alertmanager:v0.23.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0 kube-rbac-proxy:v0.11.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03 docker:19.03
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0 snapshot-controller:v4.0.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1 busybox:1.31.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4 defaultbackend-amd64:1.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1 configmap-reload:v0.7.1
2.7 Save and export the images
mkdir ks3.4.1-images
cd ks3.4.1-images
vim save-images.sh
#!/bin/bash
# versions that changed for k8s 1.25
docker save -o kube-apiserver:v1.25.16.tar kube-apiserver:v1.25.16
docker save -o kube-controller-manager:v1.25.16.tar kube-controller-manager:v1.25.16
docker save -o kube-scheduler:v1.25.16.tar kube-scheduler:v1.25.16
docker save -o kube-proxy:v1.25.16.tar kube-proxy:v1.25.16
docker save -o coredns:1.9.3.tar coredns:1.9.3
docker save -o pause:3.8.tar pause:3.8
# ks3.4.1 and unchanged versions
docker save -o kube-controllers:v3.27.3.tar kube-controllers:v3.27.3
docker save -o cni:v3.27.3.tar cni:v3.27.3
docker save -o pod2daemon-flexvol:v3.27.3.tar pod2daemon-flexvol:v3.27.3
docker save -o node:v3.27.3.tar node:v3.27.3
docker save -o haproxy:2.9.6-alpine.tar haproxy:2.9.6-alpine
docker save -o ks-installer:v3.4.1.tar ks-installer:v3.4.1
docker save -o ks-console:v3.4.1.tar ks-console:v3.4.1
docker save -o ks-controller-manager:v3.4.1.tar ks-controller-manager:v3.4.1
docker save -o ks-apiserver:v3.4.1.tar ks-apiserver:v3.4.1
docker save -o notification-manager:v2.3.0.tar notification-manager:v2.3.0
docker save -o notification-manager-operator:v2.3.0.tar notification-manager-operator:v2.3.0
docker save -o thanos:v0.31.0.tar thanos:v0.31.0
docker save -o opensearch:2.6.0.tar opensearch:2.6.0
docker save -o k8s-dns-node-cache:1.22.20.tar k8s-dns-node-cache:1.22.20
docker save -o prometheus:v2.39.1.tar prometheus:v2.39.1
docker save -o kube-state-metrics:v2.6.0.tar kube-state-metrics:v2.6.0
docker save -o provisioner-localpv:3.3.0.tar provisioner-localpv:3.3.0
docker save -o linux-utils:3.3.0.tar linux-utils:3.3.0
docker save -o prometheus-config-reloader:v0.55.1.tar prometheus-config-reloader:v0.55.1
docker save -o prometheus-operator:v0.55.1.tar prometheus-operator:v0.55.1
docker save -o node-exporter:v1.3.1.tar node-exporter:v1.3.1
docker save -o kubectl:v1.22.0.tar kubectl:v1.22.0
docker save -o notification-tenant-sidecar:v3.2.0.tar notification-tenant-sidecar:v3.2.0
docker save -o alertmanager:v0.23.0.tar alertmanager:v0.23.0
docker save -o kube-rbac-proxy:v0.11.0.tar kube-rbac-proxy:v0.11.0
docker save -o docker:19.03.tar docker:19.03
docker save -o snapshot-controller:v4.0.0.tar snapshot-controller:v4.0.0
docker save -o busybox:1.31.1.tar busybox:1.31.1
docker save -o defaultbackend-amd64:1.4.tar defaultbackend-amd64:1.4
docker save -o configmap-reload:v0.7.1.tar configmap-reload:v0.7.1
Write the push script, load-push.sh:
#!/bin/bash
# Load every saved image tarball, retag it for the private registry, and push it.
FILES=$(find . -type f \( -iname "*.tar" -o -iname "*.tar.gz" \) -printf '%P\n')
Harbor="dockerhub.kubekey.local"
Project="kubesphereio"
docker login -u admin -p Harbor12345 ${Harbor}
echo "--------[Login ${Harbor} succeed]--------"
# iterate over all ".tar"/".tar.gz" files and load each Docker image
for file in ${FILES}
do
    echo "--------[Loading Docker image from $file]--------"
    IMAGE=$(docker load -i "$file" | grep 'Loaded image:' | awk '{print $3}' | head -1)
    # the saved images carry bare names (e.g. kube-apiserver:v1.25.16);
    # retag them with the registry address and project so the push lands in the private registry
    TARGET="${Harbor}/${Project}/${IMAGE}"
    docker tag "${IMAGE}" "${TARGET}"
    echo "--------[Pushing ${TARGET}]--------"
    docker push "${TARGET}"
done
echo "--------[All Docker images pushed successfully]--------"
Compress the k8s and ks images:
cd ..
tar -czvf ks3.4.1-images.tar.gz ks3.4.1-images
At this point the offline deployment package is complete; the slimmed-down package is about 1.3 GB.
3. Installing the cluster offline
3.1 Remove the podman bundled with Kylin
podman is the container engine that ships with Kylin. To avoid conflicts with docker, uninstall it now; otherwise coredns/nodelocaldns will later fail to start, and assorted docker permission problems will follow. Run on all nodes:
yum remove podman
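After the removal, it is worth confirming the binary is really gone on each node; a small helper (the function name is made up here) makes the check explicit:

```shell
# confirm a binary is no longer on PATH (run after `yum remove podman`)
check_absent() {
  if command -v "$1" > /dev/null 2>&1; then
    echo "$1 still present"
  else
    echo "$1 removed"
  fi
}
check_absent podman
```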
3.2 Copy the packages to the offline environment
Copy the downloaded KubeKey, the artifact, the scripts, and the exported images to the installation node in the offline environment via a USB drive or similar medium.
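USB transfers can silently corrupt multi-GB files, so generating checksums before the copy and verifying them on the offline node is cheap insurance (the filenames follow the earlier steps; the .sha256 name is arbitrary):

```shell
# on the online machine, before copying:
#   sha256sum ks3.4-artifact.tar.gz ks3.4.1-images.tar.gz > offline-pkg.sha256
# on the offline node, after copying:
#   sha256sum -c offline-pkg.sha256
# the same check, demonstrated on a throwaway file:
echo "demo" > demo.txt
sha256sum demo.txt > demo.sha256
sha256sum -c demo.sha256
```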
3.3 Install the k8s dependency packages
On all nodes: upload k8s-init-KylinV10.tar.gz, extract it, and run install.sh.
3.4 Modify the config-sample.yaml file
Update the node and registry information:
- A registry node must be specified in roleGroups (KubeKey uses it to deploy the self-hosted registry).
- In the registry section, do not set type to harbor; by default a docker registry is installed, and the official Harbor does not support arm. If you need Harbor, install it yourself, or install it after KubeSphere is deployed (removing the docker registry first).
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.200.7, internalAddress: "192.168.200.7", user: root, password: "123456"}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
    # - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  system:
    ntpServers:
    - node1 # All nodes sync time from node1.
    timezone: "Asia/Shanghai"
  kubernetes:
    version: v1.25.16
    containerManager: docker
    clusterName: cluster.local
    # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    autoRenewCerts: true
    # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    maxPods: 210
  etcd:
    type: kubekey
    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #   - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key
    dataDir: "/var/lib/etcd"
    heartbeatInterval: 250
    electionTimeout: 5000
    snapshotCount: 10000
    autoCompactionRetention: 8
    metrics: basic
    quotaBackendBytes: 2147483648
    maxRequestBytes: 1572864
    maxSnapshots: 5
    maxWals: 5
    logLevel: info
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /var/openebs/local # base path of the local PV provisioner
  registry:
    # type: harbor   # leave type unset so the default docker registry is deployed (Harbor does not support arm)
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "admin"
        password: Harbor12345
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons: [] # You can install cloud-native addons (Chart or YAML) by using this field.
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 31688
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
3.5 Install the private registry from the artifact
./kk init registry -f config-sample.yaml -a ks3.4-artifact.tar.gz
3.6 Push the images
Extract the image package from step 2.7, then run:
./load-push.sh
3.7 Install k8s
Do not add the -a ks3.4-artifact.tar.gz parameter here: the artifact was already extracted when the registry was created in the previous step. Adding -a again would raise errors if no images were downloaded or if the iso is problematic.
./kk create cluster -f config-sample.yaml
Wait roughly ten to twenty minutes for the success message.
3.8 Verify
The basic components are running normally.
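A quick way to spot trouble is to filter the pod list down to anything that is not Running or Completed (a sketch; the helper name is made up, and the awk filter assumes the standard `kubectl get pods -A` column layout):

```shell
# print "namespace/pod status" for pods that are not Running/Completed
not_ready() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" {print $1 "/" $2 " " $4}'
}
# on the cluster:
#   kubectl get pods -A | not_ready
```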
4. Summary
This article installs only the core components needed to keep k8s and ks running, using docker registry as the private registry. If you need other components or Harbor, refer to the previous article and install them yourself.
From: https://www.cnblogs.com/tianxing1st/p/18455125