Deploying Harbor with Helm for a Highly Available Image Registry (Detailed Walkthrough)
Preface: Harbor Deployment from a Business-Scenario Perspective
In earlier posts I covered Harbor's offline and online installation methods, but neither supports high availability. In real business scenarios, an outage of a single Harbor server has a serious impact.
For those deployment methods, the Harbor project provides no official high-availability scheme; building one yourself requires a fair amount of knowledge of Harbor's internals.
For a high-availability setup, Harbor can instead be deployed onto a Kubernetes cluster, leveraging Kubernetes' native capabilities (scheduling, self-healing, replication) to keep Harbor available.
1. Deployment Overview
- 1 NFS server (any node of the Kubernetes cluster can double as the NFS server)
- 1 Kubernetes cluster (3 nodes: 1 master, 2 workers)
- OS: CentOS-7.8
- Harbor version: 2.4.2
2. Install Helm
Helm is a command-line client tool used mainly to create, package, publish, and manage charts for Kubernetes applications.
Install it on the Kubernetes master node.
2.1 Download the Helm binary
$ wget https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz
$ tar zxvf helm-v3.6.3-linux-amd64.tar.gz
$ cd linux-amd64/
$ cp helm /usr/local/bin/
2.2 Check the Helm version
$ helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
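2.3 Add the Harbor chart repository

The chart search in section 2.4 assumes the Harbor chart repository has already been added (the original skipped from 2.2 to 2.4); a minimal sketch of that step, using the official chart repository URL:

$ helm repo add harbor https://helm.goharbor.io   # register the official Harbor chart repository
$ helm repo update                                # refresh the local chart index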
2.4 Download the chart package locally
Since quite a few parameters need to be changed, running helm install directly from the command line gets unwieldy. I download the chart package locally and edit its configuration there instead, which is more intuitive and closer to how things are done in real work environments.
$ helm search repo harbor       # search for the chart
NAME            CHART VERSION   APP VERSION   DESCRIPTION
harbor/harbor   1.8.2           2.4.2         An open source trusted cloud native registry th...
$ helm pull harbor/harbor       # download the chart package
$ tar zxvf harbor-1.8.2.tgz     # extract it
3. Create a Namespace
Create a harbor namespace; all Harbor-related services will be deployed into it.
$ kubectl create namespace harbor
4. Create the NFS External Provisioner
NFS is used as the backing storage here, which requires an external provisioner; if you already have one, you can skip this step.
4.1 Install the NFS server
$ yum install -y nfs-utils
$ systemctl start nfs && systemctl enable nfs
$ systemctl status nfs
$ mkdir -p /data/nfs/harbor     # create the shared directory
$ cat /etc/exports
/data/nfs/harbor 10.0.0.0/24(rw,no_root_squash)
$ exportfs -arv                 # apply the export configuration
exporting 10.0.0.0/24:/data/nfs/harbor
$ systemctl restart nfs
$ showmount -e localhost        # check the exported directories
Export list for localhost:
/data/nfs/harbor 10.0.0.0/24
4.2 Install the NFS client
Here "client" means every node of the Kubernetes cluster: if a Pod gets scheduled to a node without the NFS client installed, it cannot mount the corresponding volume.
$ yum -y install nfs-utils      # nfs-utils is a package, not a systemd service; nothing needs to be started on the client
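To confirm each node can actually reach the export before going further, a quick check from any worker node (assuming the NFS server address 192.168.2.212 used in the provisioner manifest below):

$ showmount -e 192.168.2.212                                          # should list /data/nfs/harbor
$ mount -t nfs 192.168.2.212:/data/nfs/harbor /mnt && umount /mnt     # optional manual mount test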
4.3 Create the ServiceAccount and RBAC authorization
$ vim nfs-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: harbor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-cr
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: harbor
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-cr
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nfs-role
  namespace: harbor
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
  namespace: harbor
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: harbor
roleRef:
  kind: Role
  name: nfs-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
  namespace: harbor
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs       # provisioner name, referenced later by the StorageClass
            - name: NFS_SERVER
              value: 192.168.2.212         # NFS server address
            - name: NFS_PATH
              value: /data/nfs/harbor      # NFS shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.2.212          # NFS server address
            path: /data/nfs/harbor         # NFS shared directory
The image used by the Deployment is pulled from an Alibaba Cloud registry mirror.
Create the resources:
$ kubectl apply -f nfs-provisioner.yaml
$ kubectl -n harbor get pod
NAME                               READY   STATUS    RESTARTS   AGE
nfs-provisioner-6d66969dbf-n75g5   1/1     Running   0          28m
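If the pod is Running but PVCs created later stay Pending, the provisioner's logs are the first place to look; two quick checks using only names from the manifest above:

$ kubectl -n harbor logs deploy/nfs-provisioner --tail=20   # provisioning events and errors
$ kubectl -n harbor describe pod -l app=nfs-provisioner     # NFS mount failures show up as events here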
5. Create the StorageClass
Harbor's database and redis components are stateful services, so Harbor's data must be persisted.
Here a StorageClass backed by NFS is created, using the external provisioner set up in section 4. The NFS server and shared directory are:
- NFS server address: 192.168.2.212
- NFS shared directory: /data/nfs/harbor
$ vim harbor-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harbor-storageclass    # StorageClass is cluster-scoped, so no namespace is needed
provisioner: example.com/nfs   # must match the provisioner's PROVISIONER_NAME
$ kubectl apply -f harbor-storageclass.yaml
$ kubectl get storageclass
NAME                  PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
harbor-storageclass   example.com/nfs   Delete          Immediate           false                  5s
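Before handing this StorageClass to Harbor, it is worth an end-to-end test of dynamic provisioning; the throwaway claim below (test-pvc is a hypothetical name, not part of the Harbor setup) should go Bound within a few seconds:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc               # throwaway claim, delete after checking
  namespace: harbor
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: harbor-storageclass
  resources:
    requests:
      storage: 100Mi
EOF
$ kubectl -n harbor get pvc test-pvc      # STATUS should show Bound
$ kubectl -n harbor delete pvc test-pvc   # clean up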
6. Edit the values.yaml Configuration
Important! This is the part with the most pitfalls.
$ cd harbor
$ ls
cert  Chart.yaml  conf  LICENSE  README.md  templates  values.yaml
$ vim values.yaml
expose:
  type: nodePort        # no Ingress in this environment, so expose the service via NodePort
  tls:
    enabled: false      # disable TLS encryption (enabling it requires certificates)
...
externalURL: http://192.168.2.11:30002
# With nodePort and TLS disabled, this must use the http scheme and the port set in
# expose.nodePort.ports.http.nodePort; the IP is any Kubernetes node's address.

# Persistent storage configuration
persistence:
  enabled: true                   # enable persistent storage
  resourcePolicy: "keep"
  persistentVolumeClaim:          # PVC definitions for each Harbor component
    registry:                     # registry component
      existingClaim: ""
      storageClass: "harbor-storageclass"   # the StorageClass created earlier; configure the other components the same way
      subPath: ""
      accessMode: ReadWriteMany   # change to ReadWriteMany so multiple components can read and write; otherwise some components cannot read the others' data
      size: 5Gi
    chartmuseum:                  # chartmuseum component
      existingClaim: ""
      storageClass: "harbor-storageclass"
      subPath: ""
      accessMode: ReadWriteMany
      size: 5Gi
    jobservice:                   # async job component
      existingClaim: ""
      storageClass: "harbor-storageclass"   # changed, same as above
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    database:                     # PostgreSQL database component
      existingClaim: ""
      storageClass: "harbor-storageclass"
      subPath: ""
      accessMode: ReadWriteMany
      size: 1Gi
    redis:                        # Redis cache component
      existingClaim: ""
      storageClass: "harbor-storageclass"
      subPath: ""
      accessMode: ReadWriteMany
      size: 1Gi
    trivy:                        # Trivy vulnerability scanner
      existingClaim: ""
      storageClass: "harbor-storageclass"
      subPath: ""
      accessMode: ReadWriteMany
      size: 5Gi
...
harborAdminPassword: "Harbor12345"   # initial admin password; no need to change
...
metrics:
  enabled: true     # enable the metrics components (Harbor metrics can then be scraped by Prometheus; see other posts in this series); optional
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  jobservice:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
# The trace settings below are a Harbor 2.4 feature and need no changes.
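One way to sanity-check the edited values before touching the cluster is to render the chart locally; helm template produces the final manifests without installing anything:

$ helm template harbor . -n harbor > /tmp/harbor-rendered.yaml   # render manifests locally
$ grep 'storageClassName' /tmp/harbor-rendered.yaml | sort -u    # confirm every PVC picked up harbor-storageclass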
Note: if you do not want to install the latest version, you can pin a specific one, either by editing the image tags in values.yaml or by pulling a specific chart version, as sketched below.
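A minimal sketch of version pinning (both flags are standard Helm; 1.8.2 is just the chart version seen in section 2.4):

$ helm search repo harbor/harbor --versions | head   # list available chart/app version pairs
$ helm pull harbor/harbor --version 1.8.2            # fetch exactly that chart version

7. Install Harbor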
$ helm install harbor . -n harbor    # deploy the chart into the harbor namespace
NAME: harbor
LAST DEPLOYED: Mon Apr 11
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at http://192.168.2.11:30002
For more details, please visit https://github.com/goharbor/harbor
Note the dot before -n: it refers to the current directory, i.e. the unpacked chart.
8. Verify the Services
After the installation completes, verify that the related components are running properly.
[root@master ~]# kubectl -n harbor get pod -owide
NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
harbor-core-599b9c79c-g5sxf          1/1     Running   0          24m   10.244.2.59   node2   <none>           <none>
harbor-database-0                    1/1     Running   0          24m   10.244.2.61   node2   <none>           <none>
harbor-jobservice-77ddddfb75-4rxps   1/1     Running   3          24m   10.244.1.55   node1   <none>           <none>
harbor-nginx-74b65cbf94-sr59j        1/1     Running   0          24m   10.244.1.56   node1   <none>           <none>
harbor-portal-7cfc7dc6f9-dkxwd       1/1     Running   0          24m   10.244.1.54   node1   <none>           <none>
harbor-redis-0                       1/1     Running   0          24m   10.244.2.60   node2   <none>           <none>
harbor-registry-556f9dcbb7-8kwq2     2/2     Running   0          24m   10.244.2.58   node2   <none>           <none>
harbor-trivy-0                       1/1     Running   0          24m   10.244.1.57   node1   <none>           <none>
nfs-provisioner-6d66969dbf-n75g5     1/1     Running   0          31m   10.244.1.53   node1   <none>           <none>
9. Log in to the Harbor UI
Check the component services:
[root@master ~]# kubectl -n harbor get svc
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
harbor              NodePort    10.105.233.254   <none>        80:30002/TCP        25m
harbor-core         ClusterIP   10.96.36.235     <none>        80/TCP              25m
harbor-database     ClusterIP   10.100.119.167   <none>        5432/TCP            25m
harbor-jobservice   ClusterIP   10.109.113.7     <none>        80/TCP              25m
harbor-portal       ClusterIP   10.101.111.123   <none>        80/TCP              25m
harbor-redis        ClusterIP   10.103.43.191    <none>        6379/TCP            25m
harbor-registry     ClusterIP   10.97.39.105     <none>        5000/TCP,8080/TCP   25m
harbor-trivy        ClusterIP   10.96.77.232     <none>        8080/TCP            25m
The UI management interface is now reachable on port 30002 of any Kubernetes node's IP; log in as admin with the harborAdminPassword set in values.yaml.
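Beyond the UI, the registry endpoint itself deserves a push test. The sketch below assumes a node IP of 192.168.2.11 (the externalURL configured earlier) and the default admin credentials; since TLS is disabled, the address must first be whitelisted in Docker's insecure-registries on the client machine:

$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.2.11:30002"]
}
$ systemctl restart docker
$ docker login 192.168.2.11:30002 -u admin -p Harbor12345
$ docker tag busybox:latest 192.168.2.11:30002/library/busybox:v1
$ docker push 192.168.2.11:30002/library/busybox:v1   # the image should then appear under the default library project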