In "Docker基础知识 (21) - Kubernetes(四) | 在 K8s 集群上部署 NFS 实现共享存储 (1)" we demonstrated how to deploy NFS in a K8s cluster and create static PV/PVCs. This article continues the series and demonstrates how to create dynamic PV/PVCs.
For a detailed introduction to shared storage in Kubernetes, see "系统架构与设计(7)- Kubernetes 的共享存储".
NFS (Network File System) is a distributed file system protocol that allows a system to share directories and files with other systems over a network. With NFS, users and programs can access files on a remote system as if they were local files.
1. Deployment Environment
Virtual machine: VirtualBox 6.1.30 (Windows edition)
Operating system: Linux CentOS 7.9 64-bit
Docker version: 20.10.7
Docker Compose version: 2.6.1
Kubernetes version: 1.23.0
Working directory: /home/k8s
Linux user: a non-root user (any username; shown here as xxx) that belongs to the docker group
1) Host list
Hostname     IP             Role     Operating system
k8s-master   192.168.0.10   master   CentOS 7.9
k8s-node01   192.168.0.11   node     CentOS 7.9
2. NFS Server Configuration
# Edit the list of exported (shared) directories
$ sudo vim /etc/exports
# Shared directory
/home/k8s/share 192.168.0.0/16(rw,sync,all_squash,anonuid=1000,anongid=1000)
# Restart the nfs service
$ sudo systemctl restart nfs
# List the directories the server exports for mounting
$ showmount -e 192.168.0.10
Export list for 192.168.0.10:
/home/k8s/share 192.168.0.0/16
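For reference, the export options used above have the following meanings (standard exports(5) options; adjust them to your own environment):

```
# /etc/exports — annotated version of the export line used above
#   rw               clients may read and write
#   sync             commit writes to disk before replying to requests
#   all_squash       map all client users (including root) to the anonymous user
#   anonuid/anongid  UID/GID that squashed users are mapped to
#                    (1000 here, matching the owner of /home/k8s/share)
/home/k8s/share 192.168.0.0/16(rw,sync,all_squash,anonuid=1000,anongid=1000)
```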
3. Deploy nfs-subdir-external-provisioner
nfs-subdir-external-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; it requires an existing NFS server.
GitHub:https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
1) Download nfs-subdir-external-provisioner
$ cd /home/k8s
$ mkdir nfs-subdir-external-provisioner
$ cd nfs-subdir-external-provisioner
$ git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
$ cp -R nfs-subdir-external-provisioner/deploy ./
Note: when downloading with Git, copy the nfs-subdir-external-provisioner/deploy directory into the /home/k8s/nfs-subdir-external-provisioner directory.
Alternatively, download the zip package from https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner; the unzipped directory is named nfs-subdir-external-provisioner-master, and copying the deploy directory works the same way as above.
The version downloaded for this article is 4.0.2.
2) Deploy the rbac.yaml file
$ cd /home/k8s/nfs-subdir-external-provisioner/deploy
$ cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# Run the create command
$ kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
# View the ServiceAccounts (you can also run kubectl get sa)
$ kubectl get ServiceAccount
NAME                     SECRETS   AGE
default                  1         5d10h
nfs-client-provisioner   1         82s
$ kubectl get ClusterRole nfs-client-provisioner-runner
NAME                            CREATED AT
nfs-client-provisioner-runner   2022-11-22T11:39:51Z
$ kubectl get ClusterRoleBinding run-nfs-client-provisioner
NAME                         ROLE                                        AGE
run-nfs-client-provisioner   ClusterRole/nfs-client-provisioner-runner   3m45s
$ kubectl get Role leader-locking-nfs-client-provisioner
NAME                                    CREATED AT
leader-locking-nfs-client-provisioner   2022-11-21T20:16:45Z
$ kubectl get RoleBinding leader-locking-nfs-client-provisioner
NAME                                    ROLE                                         AGE
leader-locking-nfs-client-provisioner   Role/leader-locking-nfs-client-provisioner   15h
3) Edit the deployment.yaml file
$ cd /home/k8s/nfs-subdir-external-provisioner/deploy
$ vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-hangzhou.aliyuncs.com/weiyigeek/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.0.10    # NFS server
            - name: NFS_PATH
              value: /home/k8s/share # NFS shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.10    # NFS server
            path: /home/k8s/share   # NFS shared directory
Note: the image source is changed to registry.cn-hangzhou.aliyuncs.com/weiyigeek/nfs-subdir-external-provisioner:v4.0.2, and the NFS server address and shared directory are updated for our environment.
# Run the create command
$ kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
# View the Pod
$ kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-d66c499b4-6wxsh   1/1     Running   0          2m42s
# View the Pod log (running status)
$ kubectl logs nfs-client-provisioner-d66c499b4-6wxsh
I1122 13:00:23.794534       1 leaderelection.go:242] attempting to acquire leader lease  nginx-test/k8s-sigs.io-nfs-subdir-external-provisioner...
I1122 13:00:23.812141       1 leaderelection.go:252] successfully acquired lease nginx-test/k8s-sigs.io-nfs-subdir-external-provisioner
I1122 13:00:23.812409       1 controller.go:820] Starting provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-d66c499b4-6wxsh_a43417c5-a55b-45a8-9e85-5808c9c980ec!
I1122 13:00:23.812757       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"nginx-test", Name:"k8s-sigs.io-nfs-subdir-external-provisioner", UID:"35a92d4a-34c7-4448-8f60-03e743ece267", APIVersion:"v1", ResourceVersion:"338177", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-d66c499b4-6wxsh_a43417c5-a55b-45a8-9e85-5808c9c980ec became leader
I1122 13:00:23.912953       1 controller.go:869] Started provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-d66c499b4-6wxsh_a43417c5-a55b-45a8-9e85-5808c9c980ec!
Note: the provisioner is now running normally.
4. Create Dynamic PV/PVC
1) Edit the class.yaml (StorageClass) file
$ cd /home/k8s/nfs-subdir-external-provisioner/deploy
$ vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storage-class
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
Note: metadata.name is changed to nfs-client-storage-class.
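A note on the archiveOnDelete parameter: with "false", the provisioner deletes the backing directory when the PVC is removed; with "true", it instead renames the directory with an archived- prefix so the data is kept. A sketch of a "keep the data" variant (the class name here is hypothetical, for illustration only):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storage-class-archive   # hypothetical name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # must match PROVISIONER_NAME
parameters:
  archiveOnDelete: "true"   # archive the directory instead of deleting it
```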
# Run the create command
$ kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client-storage-class created
# View the StorageClasses (you can also run kubectl get sc)
$ kubectl get StorageClass
NAME                       PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE ...
nfs-client-storage-class   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate
2) Edit the test-claim.yaml file
$ cd /home/k8s/nfs-subdir-external-provisioner/deploy
$ vim test-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
Note: spec.storageClassName is changed to nfs-client-storage-class.
# Run the create command
$ kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created
# View the PVC
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS...   STORAGECLASS               AGE
test-claim   Bound    pvc-8c2f1413-d0b5-47f7-93d9-83c23ad9119b   1Mi        RWX         nfs-client-storage-class   13s
# View the PV
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS               REASON   AGE
pvc-ac6217f4-eed5-48ca-8eb6-a71dd12ce23a   1Mi        RWX            Delete           Bound    default/test-claim   nfs-client-storage-class            25s
3) Edit the test-pod.yaml file
$ cd /home/k8s/nfs-subdir-external-provisioner/deploy
$ cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:stable
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
# Run the create command
$ kubectl apply -f test-pod.yaml
pod/test-pod created
# View the Pods
$ kubectl get pod
NAME                                     READY   STATUS      RESTARTS   AGE
nfs-client-provisioner-d66c499b4-tr2pt   1/1     Running     0          18m
test-pod                                 0/1     Completed   0          5m18s
# View the NFS shared directory on the master node
$ cd /home/k8s/share
$ ls -la
total 0
drwxrwxr-x 3 xxx xxx  73 Nov 22 08:40 .
drwxr-xr-x 5 xxx xxx 125 Nov 22 08:31 ..
drwxrwxrwx 2 xxx xxx  21 Nov 22 08:43 default-test-claim-pvc-ac6217f4-eed5-48ca-8eb6-a71dd12ce23a
Note: the NFS provisioner automatically created the default-test-claim-pvc-ac6217f4-eed5-48ca-8eb6-a71dd12ce23a directory, and inside it you can see the generated SUCCESS file.
Directories created by the NFS provisioner are named "<namespace>-<PVC name>-<PV name>". The PV name is a random string, but as long as the PVC is not deleted, the binding between the claim and its storage in K8s is preserved.
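The naming rule can be illustrated with a trivial shell sketch, using the namespace, PVC, and PV names from this walkthrough:

```shell
# Compose the directory name the provisioner generates: <namespace>-<pvc>-<pv>
ns="default"
pvc_name="test-claim"
pv_name="pvc-ac6217f4-eed5-48ca-8eb6-a71dd12ce23a"
dir="${ns}-${pvc_name}-${pv_name}"
echo "${dir}"   # default-test-claim-pvc-ac6217f4-eed5-48ca-8eb6-a71dd12ce23a
```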
The directory backing a dynamic PV is created by the NFS client (the provisioner pod, through its mount of the share), so it appears directly in the NFS server's shared directory.
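With the StorageClass in place, any workload can request storage dynamically just by referencing it in a PVC; no static PV needs to be created in advance. A minimal sketch (the nginx-data and nginx names are hypothetical, not part of this walkthrough):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data        # hypothetical PVC for illustration
spec:
  storageClassName: nfs-client-storage-class
  accessModes:
    - ReadWriteMany       # NFS lets many pods read/write the same volume
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nginx-data
```

Both replicas share the same NFS-backed directory; with archiveOnDelete: "false", deleting the PVC also removes that directory.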