
Kunpeng (arm64) + Kylin V10: Offline Deployment of KubeSphere 3.4.1 (Lean Edition: Building the Offline Package on Windows)


  1. Prerequisites

  2. Building the offline package

2.1 Create the working directory

Go to E:\KubeSphere, open a terminal (cmd), type wsl to enter the subsystem, and create an arm directory.
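
A minimal sketch of this step, assuming the default WSL distro and that drive E: is mounted under /mnt/e (as the shell prompts below show):

wsl                        # from cmd, enter the WSL subsystem
cd /mnt/e/KubeSphere       # Windows drive E: is mounted at /mnt/e
mkdir -p arm && cd arm     # create the working directory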

2.2 Download kk

  • Option 1
root@DESKTOP-BB0KRFQ:/mnt/e/KubeSphere/arm# export KKZONE=cn
root@DESKTOP-BB0KRFQ:/mnt/e/KubeSphere/arm#  curl -sfL https://get-kk.kubesphere.io | VERSION=v3.1.5 sh -

Downloading kubekey v3.1.5 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.1.5/kubekey-v3.1.5-linux-amd64.tar.gz ...


Kubekey v3.1.5 Download Complete!
root@DESKTOP-BB0KRFQ:/mnt/e/KubeSphere/arm# ls
kk  kubekey-v3.1.5-linux-amd64.tar.gz
  • Option 2

Using your local machine, download directly from GitHub: Releases · kubesphere/kubekey

Upload it to the server's /root/kubesphere directory and extract:

tar zxf kubekey-v3.1.5-linux-amd64.tar.gz

The local Windows machine uses the amd64 build of kk, but the actual deployment uses the arm64 build, so you also need to manually download kubekey-v3.1.5-linux-arm64.tar.gz, as sketched below.
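
A sketch of fetching the arm64 build, assuming it is published at the same paths as the amd64 tarball shown above:

curl -LO https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.1.5/kubekey-v3.1.5-linux-arm64.tar.gz
# or from GitHub:
# curl -LO https://github.com/kubesphere/kubekey/releases/download/v3.1.5/kubekey-v3.1.5-linux-arm64.tar.gz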

2.3 Edit the artifact manifest

Generating the artifact with the official documentation's sample manifest produced various image errors, so no images are downloaded here (older kk versions required downloading at least one image). Images are instead handled by the shell scripts written below. The operating-system ISO is not downloaded either; the dependency package built in step 1 is used instead.

Advantages

  • Smaller artifact

  • More flexibility when images change

  • Components added or removed on demand

Disadvantages

  • More scripts to write

  • An extra stage in the offline deployment process

vim manifest-kylin.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - arm64
  operatingSystems:
  - arch: arm64
    type: linux
    id: kylin
    version: "V10"
    osImage: Kylin Linux Advanced Server V10
    repository:
      iso:
        localPath:
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.25.16
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.3
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"

Note: docker-registry is used as the image repository here. If you need Harbor, you can deploy this way first and then install Harbor separately, or refer to the earlier article 鲲鹏(arm64)+麒麟(kylin v10)离线部署kubesphere(含离线部署新方式) and install Harbor directly.

2.4 Export the offline artifact

export KKZONE=cn
./kk artifact export -m manifest-kylin.yaml -o ks3.4-artifact.tar.gz

Export complete.
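
A quick sanity check on the exported artifact (the size will vary with the component list):

ls -lh ks3.4-artifact.tar.gz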

2.5 Manually pull the Kubernetes images

vim pull-images.sh
#!/bin/bash

# versions that changed for k8s 1.25
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.25.16
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.25.16
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.25.16
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.25.16
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.8
# ks3.4.1 and unchanged versions
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
docker pull --platform=linux/arm64  registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1

Check the download results.
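
Run the script and list what was pulled, for example:

bash pull-images.sh
docker images | grep registry.cn-beijing.aliyuncs.com/kubesphereio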

2.6 Retag the images

vim tag-images.sh

Adjust the target registry address and project name to match your own docker-registry/harbor. The script below uses dockerhub.kubekey.local and the kubesphereio project, matching privateRegistry and namespaceOverride in config-sample.yaml; the new tags must carry this prefix so that load-push.sh can later push them into the private registry.

#!/bin/bash

# versions that changed for k8s 1.25
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.25.16  dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.25.16
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.25.16  dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.25.16
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.25.16  dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.25.16
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.25.16  dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.25.16
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3  dockerhub.kubekey.local/kubesphereio/coredns:1.9.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.8  dockerhub.kubekey.local/kubesphereio/pause:3.8
# ks3.4.1 and unchanged versions
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3  dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3  dockerhub.kubekey.local/kubesphereio/cni:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3  dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3  dockerhub.kubekey.local/kubesphereio/node:v3.27.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine  dockerhub.kubekey.local/kubesphereio/haproxy:2.9.6-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1  dockerhub.kubekey.local/kubesphereio/ks-installer:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1  dockerhub.kubekey.local/kubesphereio/ks-console:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1  dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1  dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.4.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0  dockerhub.kubekey.local/kubesphereio/notification-manager:v2.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0  dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v2.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0  dockerhub.kubekey.local/kubesphereio/thanos:v0.31.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0  dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20  dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1  dockerhub.kubekey.local/kubesphereio/prometheus:v2.39.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0  dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.6.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0  dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0  dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1  dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1  dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1  dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0  dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0  dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0  dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0  dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03  dockerhub.kubekey.local/kubesphereio/docker:19.03
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0  dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1  dockerhub.kubekey.local/kubesphereio/busybox:1.31.1
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4  dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1  dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.7.1
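
After tagging, the images should also appear under the private-registry prefix:

bash tag-images.sh
docker images | grep dockerhub.kubekey.local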

2.7 Save and export the images

mkdir ks3.4.1-images
cd ks3.4.1-images
vim save-images.sh
#!/bin/bash
# versions that changed for k8s 1.25
docker save -o kube-apiserver:v1.25.16.tar  dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.25.16
docker save -o kube-controller-manager:v1.25.16.tar  dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.25.16
docker save -o kube-scheduler:v1.25.16.tar  dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.25.16
docker save -o kube-proxy:v1.25.16.tar  dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.25.16
docker save -o coredns:1.9.3.tar  dockerhub.kubekey.local/kubesphereio/coredns:1.9.3
docker save -o pause:3.8.tar  dockerhub.kubekey.local/kubesphereio/pause:3.8
# ks3.4.1 and unchanged versions
docker save -o kube-controllers:v3.27.3.tar  dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.3
docker save -o cni:v3.27.3.tar  dockerhub.kubekey.local/kubesphereio/cni:v3.27.3
docker save -o pod2daemon-flexvol:v3.27.3.tar  dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.27.3
docker save -o node:v3.27.3.tar  dockerhub.kubekey.local/kubesphereio/node:v3.27.3
docker save -o haproxy:2.9.6-alpine.tar  dockerhub.kubekey.local/kubesphereio/haproxy:2.9.6-alpine
docker save -o ks-installer:v3.4.1.tar  dockerhub.kubekey.local/kubesphereio/ks-installer:v3.4.1
docker save -o ks-console:v3.4.1.tar  dockerhub.kubekey.local/kubesphereio/ks-console:v3.4.1
docker save -o ks-controller-manager:v3.4.1.tar  dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.4.1
docker save -o ks-apiserver:v3.4.1.tar  dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.4.1
docker save -o notification-manager:v2.3.0.tar  dockerhub.kubekey.local/kubesphereio/notification-manager:v2.3.0
docker save -o notification-manager-operator:v2.3.0.tar  dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v2.3.0
docker save -o thanos:v0.31.0.tar  dockerhub.kubekey.local/kubesphereio/thanos:v0.31.0
docker save -o opensearch:2.6.0.tar  dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0
docker save -o k8s-dns-node-cache:1.22.20.tar  dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20
docker save -o prometheus:v2.39.1.tar  dockerhub.kubekey.local/kubesphereio/prometheus:v2.39.1
docker save -o kube-state-metrics:v2.6.0.tar  dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.6.0
docker save -o provisioner-localpv:3.3.0.tar  dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
docker save -o linux-utils:3.3.0.tar  dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
docker save -o prometheus-config-reloader:v0.55.1.tar  dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
docker save -o prometheus-operator:v0.55.1.tar  dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
docker save -o node-exporter:v1.3.1.tar  dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
docker save -o kubectl:v1.22.0.tar  dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
docker save -o notification-tenant-sidecar:v3.2.0.tar  dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
docker save -o alertmanager:v0.23.0.tar  dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
docker save -o kube-rbac-proxy:v0.11.0.tar  dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
docker save -o docker:19.03.tar  dockerhub.kubekey.local/kubesphereio/docker:19.03
docker save -o snapshot-controller:v4.0.0.tar  dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
docker save -o busybox:1.31.1.tar  dockerhub.kubekey.local/kubesphereio/busybox:1.31.1
docker save -o defaultbackend-amd64:1.4.tar  dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
docker save -o configmap-reload:v0.7.1.tar  dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.7.1
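
Run it from inside ks3.4.1-images; each image lands in its own tar file:

bash save-images.sh
ls -lh *.tar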

Write the push script load-push.sh:

#!/bin/bash
# Collect every ".tar" / ".tar.gz" image archive in the current directory
FILES=$(find . -type f \( -iname "*.tar" -o -iname "*.tar.gz" \) -printf '%P\n')

Harbor="dockerhub.kubekey.local"

docker login -u admin -p Harbor12345 ${Harbor}
echo "--------[Login to ${Harbor} succeeded]--------"

# Load each archive and push the restored image. The images were tagged with
# the ${Harbor}/kubesphereio/ prefix in section 2.6, so they push straight
# into the private registry.
for file in ${FILES}
do
    echo "--------[Loading Docker image from $file]--------"
    docker load -i "$file" > loadimages
    IMAGE=$(grep 'Loaded image:' loadimages | awk '{print $3}' | head -1)
    echo "--------[$IMAGE]--------"
    docker push "$IMAGE"
done
echo "--------[All Docker images pushed successfully]--------"

Compress the k8s and ks images:

cd ..
tar -czvf ks3.4.1-images.tar.gz ks3.4.1-images

At this point the offline deployment package is complete; the trimmed package is about 1.3 GB.

  3. Installing the cluster offline

3.1 Remove Kylin's bundled podman

podman is the container engine bundled with Kylin. Uninstall it to avoid later conflicts with docker; otherwise coredns/nodelocaldns may fail to start and assorted docker permission problems can appear. Run on all nodes:

yum remove podman

3.2 Copy the packages to the offline environment

Copy the downloaded KubeKey, the artifact, the scripts, and the exported images to the installation node in the offline environment via a USB drive or similar medium.

3.3 Install the Kubernetes dependency packages

Run on all nodes: upload k8s-init-KylinV10.tar.gz, extract it, and execute install.sh.
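
A minimal sketch, assuming the archive unpacks into a directory of the same name (adjust the path if yours differs):

tar zxf k8s-init-KylinV10.tar.gz
cd k8s-init-KylinV10
./install.sh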

3.4 Modify the config-sample.yaml configuration file

Modify the node and registry information as appropriate:

  • A registry deployment node must be specified (KubeKey uses it to deploy the self-managed image registry).

  • In the registry section, do not set type to harbor; by default a docker registry is then installed. Harbor does not officially support arm, so if you need it, install it yourself, or deploy KubeSphere this way first, remove the docker registry, and then install Harbor.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.200.7, internalAddress: "192.168.200.7", user: root, password: "123456"}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
      #  - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    registry:
    - node1
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""      
    port: 6443
  system:
    ntpServers:
      - node1 # All nodes sync time with node1.
    timezone: "Asia/Shanghai"

  kubernetes:
    version: v1.25.16
    containerManager: docker
    clusterName: cluster.local
    # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    autoRenewCerts: true
    # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    maxPods: 210
  etcd:
    type: kubekey  
    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #     - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key
    dataDir: "/var/lib/etcd"
    heartbeatInterval: 250
    electionTimeout: 5000
    snapshotCount: 10000
    autoCompactionRetention: 8
    metrics: basic
    quotaBackendBytes: 2147483648 
    maxRequestBytes: 1572864
    maxSnapshots: 5
    maxWals: 5
    logLevel: info
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /var/openebs/local # base path of the local PV provisioner
  registry:
    # type: harbor  # leave unset: KubeKey then deploys a docker registry (Harbor has no official arm support)
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "admin"
        password: Harbor12345
        skipTLSVerify: true # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons: [] # You can install cloud-native addons (Chart or YAML) by using this field.

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 31688
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

3.5 Install the private registry from the artifact

./kk init registry -f config-sample.yaml -a ks3.4-artifact.tar.gz
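
Optionally confirm the registry responds before pushing, using the standard registry API (credentials as in load-push.sh; -k skips verification of the self-signed certificate, and the -u flag is simply ignored if the registry was deployed without auth):

curl -k -u admin:Harbor12345 https://dockerhub.kubekey.local/v2/_catalog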

3.6 Push the images

Extract the image bundle from section 2.7, then run:

./load-push.sh
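
Spelled out on the registry node (archive name from section 2.7):

tar zxf ks3.4.1-images.tar.gz
cd ks3.4.1-images
chmod +x load-push.sh
./load-push.sh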

3.7 Install Kubernetes

The -a ks3.4-artifact.tar.gz flag is no longer added here, because the artifact was already unpacked and consumed when the registry was created in the previous step. Passing -a again would raise errors if no images were downloaded or the ISO has problems.

./kk create cluster -f config-sample.yaml

Wait roughly ten to twenty minutes until the success message appears.

3.8 Verify

The core components are running normally; the checks below illustrate this.
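
Some standard spot checks (generic kubectl commands; the installer-log command follows the official KubeSphere docs):

kubectl get nodes -o wide
kubectl get pods -A
# follow the ks-installer log until the "Welcome to KubeSphere" banner appears
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f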

  4. Summary

This article installs only the core components needed to get Kubernetes and KubeSphere running, using docker registry as the private repository. If you need other components or Harbor, refer to the previous article and install them yourself.

From: https://www.cnblogs.com/tianxing1st/p/18455125

    1.引言跌倒检测是一个重要的研究领域,尤其在老年人和病人监护中,及时检测并响应跌倒事件可以大大减少伤害和死亡的风险。本博客将介绍如何构建一个基于深度学习的跌倒检测系统,使用YOLOv5进行目标检测,并设计一个用户界面(UI)来实时监控和反馈。本文将详细描述系统的各个组成部分......