
Docker Basics (22) - Kubernetes (5) | Deploying NFS for Shared Storage on a K8s Cluster (2)



In "Docker Basics (21) - Kubernetes (4) | Deploying NFS for Shared Storage on a K8s Cluster (1)" we demonstrated how to deploy NFS in a K8s cluster and create static PVs/PVCs. This article continues by demonstrating how to create dynamic PVs/PVCs.

For a detailed introduction to shared storage in Kubernetes, see "System Architecture and Design (7) - Kubernetes Shared Storage".

NFS (Network File System) is a network file system and one of the file systems supported by FreeBSD. NFS allows a system to share directories and files with others over the network; with NFS, users and programs can access files on a remote system as if they were local files.


1. Deployment Environment

    Virtual machine: VirtualBox 6.1.30 (Windows edition)
    Operating system: Linux CentOS 7.9, 64-bit
    Docker version: 20.10.7
    Docker Compose version: 2.6.1
    Kubernetes version: 1.23.0

    Working directory: /home/k8s
    Linux user: a non-root user (any username; shown as xxx in this article) that belongs to the docker group

    1) Host list

        Hostname    IP              Role        OS
        k8s-master  192.168.0.10    master      CentOS 7.9
        k8s-node01  192.168.0.11    node        CentOS 7.9


2. NFS Server Configuration

    # Edit /etc/exports to configure the shared directory
    $ sudo vim /etc/exports

        # Shared directory
        /home/k8s/share   192.168.0.0/16(rw,sync,all_squash,anonuid=1000,anongid=1000)

    # Restart the NFS service
    $ sudo systemctl restart nfs

    # List the directories that the server exports for mounting
    $ showmount -e 192.168.0.10

        Export list for 192.168.0.10:
        /home/k8s/share   192.168.0.0/16   
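
    The steps below assume the worker nodes can also reach this export, since the provisioner Pod may be scheduled on any node. A minimal check from k8s-node01 (not part of the original article) is shown here; it installs the NFS client tools if needed and does a temporary mount:

    # On k8s-node01: install the NFS client tools if they are missing
    $ sudo yum install -y nfs-utils

    # Temporarily mount the export to confirm the node can reach it, then unmount
    $ sudo mount -t nfs 192.168.0.10:/home/k8s/share /mnt
    $ ls /mnt
    $ sudo umount /mnt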


3. Deploy nfs-subdir-external-provisioner

    nfs-subdir-external-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; an existing NFS server is required.

    GitHub:https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

    1) Download nfs-subdir-external-provisioner

        $ cd /home/k8s
        $ mkdir nfs-subdir-external-provisioner
        $ cd nfs-subdir-external-provisioner       
        
        $ git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
        $ cp -R nfs-subdir-external-provisioner/deploy ./

        Note: When downloading with Git, copy the nfs-subdir-external-provisioner/deploy directory into the /home/k8s/nfs-subdir-external-provisioner directory.

            Alternatively, download the zip package from https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner; the extracted directory is named nfs-subdir-external-provisioner-master, and the deploy directory is copied in the same way as above.

            The version used in this article is 4.0.2.
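
        All of the manifests used in the remaining steps live in the deploy directory, so a quick listing is a simple way to confirm the copy succeeded (an extra check, not part of the original article):

        $ ls /home/k8s/nfs-subdir-external-provisioner/deploy

            # Expect to see, among possibly a few other files depending on the version:
            # class.yaml  deployment.yaml  rbac.yaml  test-claim.yaml  test-pod.yaml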

    2) Deploy the rbac.yaml file

        $ cd /home/k8s/nfs-subdir-external-provisioner/deploy
        $ cat rbac.yaml

              apiVersion: v1
              kind: ServiceAccount
              metadata:
                name: nfs-client-provisioner
                # replace with namespace where provisioner is deployed
                namespace: default
              ---
              kind: ClusterRole
              apiVersion: rbac.authorization.k8s.io/v1
              metadata:
                name: nfs-client-provisioner-runner
              rules:
                - apiGroups: [""]
                  resources: ["persistentvolumes"]
                  verbs: ["get", "list", "watch", "create", "delete"]
                - apiGroups: [""]
                  resources: ["persistentvolumeclaims"]
                  verbs: ["get", "list", "watch", "update"]
                - apiGroups: ["storage.k8s.io"]
                  resources: ["storageclasses"]
                  verbs: ["get", "list", "watch"]
                - apiGroups: [""]
                  resources: ["events"]
                  verbs: ["create", "update", "patch"]
              ---
              kind: ClusterRoleBinding
              apiVersion: rbac.authorization.k8s.io/v1
              metadata:
                name: run-nfs-client-provisioner
              subjects:
                - kind: ServiceAccount
                  name: nfs-client-provisioner
                  # replace with namespace where provisioner is deployed
                  namespace: default
              roleRef:
                kind: ClusterRole
                name: nfs-client-provisioner-runner
                apiGroup: rbac.authorization.k8s.io
              ---
              kind: Role
              apiVersion: rbac.authorization.k8s.io/v1
              metadata:
                name: leader-locking-nfs-client-provisioner
                # replace with namespace where provisioner is deployed
                namespace: default
              rules:
                - apiGroups: [""]
                  resources: ["endpoints"]
                  verbs: ["get", "list", "watch", "create", "update", "patch"]
              ---
              kind: RoleBinding
              apiVersion: rbac.authorization.k8s.io/v1
              metadata:
                name: leader-locking-nfs-client-provisioner
                # replace with namespace where provisioner is deployed
                namespace: default
              subjects:
                - kind: ServiceAccount
                  name: nfs-client-provisioner
                  # replace with namespace where provisioner is deployed
                  namespace: default
              roleRef:
                kind: Role
                name: leader-locking-nfs-client-provisioner
                apiGroup: rbac.authorization.k8s.io


        # Create the RBAC resources
        $ kubectl apply -f rbac.yaml

            serviceaccount/nfs-client-provisioner created
            clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
            clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
            role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
            rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created


        # View the ServiceAccounts (you can also run: kubectl get sa)
        $ kubectl get ServiceAccount

            NAME                     SECRETS   AGE
            default                  1         5d10h
            nfs-client-provisioner   1         82s


        $ kubectl get ClusterRole nfs-client-provisioner-runner

            NAME                            CREATED AT
            nfs-client-provisioner-runner   2022-11-22T11:39:51Z  


        $ kubectl get ClusterRoleBinding run-nfs-client-provisioner   

            NAME                         ROLE                                        AGE
            run-nfs-client-provisioner   ClusterRole/nfs-client-provisioner-runner   3m45s


        $ kubectl get Role leader-locking-nfs-client-provisioner

            NAME                                    CREATED AT
            leader-locking-nfs-client-provisioner   2022-11-21T20:16:45Z


        $ kubectl get RoleBinding leader-locking-nfs-client-provisioner

            NAME                                    ROLE                                         AGE
            leader-locking-nfs-client-provisioner   Role/leader-locking-nfs-client-provisioner   15h


    3) Modify the deployment.yaml file

        $ cd /home/k8s/nfs-subdir-external-provisioner/deploy
        $ vim deployment.yaml

            apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: nfs-client-provisioner
              labels:
                app: nfs-client-provisioner
                # replace with namespace where provisioner is deployed
              namespace: default
            spec:
              replicas: 1
              strategy:
                type: Recreate
              selector:
                matchLabels:
                  app: nfs-client-provisioner
              template:
                metadata:
                  labels:
                    app: nfs-client-provisioner
                spec:
                  serviceAccountName: nfs-client-provisioner
                  containers:
                    - name: nfs-client-provisioner
                      #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
                      image: registry.cn-hangzhou.aliyuncs.com/weiyigeek/nfs-subdir-external-provisioner:v4.0.2
                      volumeMounts:
                        - name: nfs-client-root
                          mountPath: /persistentvolumes
                      env:
                        - name: PROVISIONER_NAME
                          value: k8s-sigs.io/nfs-subdir-external-provisioner
                        - name: NFS_SERVER
                          value: 192.168.0.10   # NFS server
                        - name: NFS_PATH
                          value: /home/k8s/share      # NFS shared directory
                  volumes:
                    - name: nfs-client-root
                      nfs:
                        server: 192.168.0.10    # NFS server
                        path: /home/k8s/share         # NFS shared directory

            Note: Change the image source to registry.cn-hangzhou.aliyuncs.com/weiyigeek/nfs-subdir-external-provisioner:v4.0.2, and set the NFS server address and shared directory to your own values.
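
            If you prefer not to edit the file by hand, the same changes can be scripted with sed. This is only a convenience sketch and not part of the original article; the placeholder values 10.3.243.101 and /ifs/kubernetes are what the upstream 4.0.2 manifest used at the time of writing, so check your copy of deployment.yaml before running it:

            # Replace the image, NFS server and shared directory in place
            $ sed -i \
                -e 's#k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2#registry.cn-hangzhou.aliyuncs.com/weiyigeek/nfs-subdir-external-provisioner:v4.0.2#' \
                -e 's#10.3.243.101#192.168.0.10#g' \
                -e 's#/ifs/kubernetes#/home/k8s/share#g' \
                deployment.yaml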

        # Create the Deployment
        $ kubectl apply -f deployment.yaml

            deployment.apps/nfs-client-provisioner created

        # View the Pods
        $ kubectl get pod

            NAME                                      READY   STATUS    RESTARTS   AGE
            nfs-client-provisioner-d66c499b4-6wxsh    1/1     Running   0          2m42s
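
        If the Pod does not reach the Running state (for example because the image cannot be pulled), describing it will show the relevant events. This check is an addition to the original article; substitute your own Pod name:

        # Troubleshooting tip: inspect the Pod's events
        $ kubectl describe pod nfs-client-provisioner-d66c499b4-6wxsh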

       
        # View the Pod logs (to check the running status)
        $ kubectl logs nfs-client-provisioner-d66c499b4-6wxsh

            I1122 13:00:23.794534       1 leaderelection.go:242] attempting to acquire leader lease  nginx-test/k8s-sigs.io-nfs-subdir-external-provisioner...
            I1122 13:00:23.812141       1 leaderelection.go:252] successfully acquired lease nginx-test/k8s-sigs.io-nfs-subdir-external-provisioner
            I1122 13:00:23.812409       1 controller.go:820] Starting provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-d66c499b4-6wxsh_a43417c5-a55b-45a8-9e85-5808c9c980ec!
            I1122 13:00:23.812757       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"nginx-test", Name:"k8s-sigs.io-nfs-subdir-external-provisioner", UID:"35a92d4a-34c7-4448-8f60-03e743ece267", APIVersion:"v1", ResourceVersion:"338177", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-d66c499b4-6wxsh_a43417c5-a55b-45a8-9e85-5808c9c980ec became leader
            I1122 13:00:23.912953       1 controller.go:869] Started provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-d66c499b4-6wxsh_a43417c5-a55b-45a8-9e85-5808c9c980ec!

            Note: The provisioner is now up and running normally.


4. Create Dynamic PV/PVC

    1) Modify the class.yaml (StorageClass) file
    
        $ cd /home/k8s/nfs-subdir-external-provisioner/deploy
        $ vim class.yaml

            apiVersion: storage.k8s.io/v1
            kind: StorageClass
            metadata:
              name: nfs-client-storage-class           
            provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # must match the deployment's env PROVISIONER_NAME
            parameters:
              archiveOnDelete: "false"

            Note: Change metadata.name to nfs-client-storage-class. The provisioner field must match the PROVISIONER_NAME environment variable set in deployment.yaml (k8s-sigs.io/nfs-subdir-external-provisioner).

        # Create the StorageClass
        $ kubectl apply -f class.yaml

            storageclass.storage.k8s.io/nfs-client-storage-class created

        # View the StorageClass (you can also run: kubectl get sc)
        $ kubectl get StorageClass  

            NAME                      PROVISIONER                                    RECLAIMPOLICY   VOLUMEBINDINGMODE ...
            nfs-client-storage-class  k8s-sigs.io/nfs-subdir-external-provisioner    Delete          Immediate            
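
        Optionally, the class can be marked as the cluster's default StorageClass so that PVCs without an explicit storageClassName use it automatically. This is not required for the rest of this article; a sketch of the standard annotation-based approach:

        $ kubectl patch storageclass nfs-client-storage-class \
            -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'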

 

    2) Modify the test-claim.yaml file

        $ cd /home/k8s/nfs-subdir-external-provisioner/deploy
        $ vim test-claim.yaml   

            apiVersion: v1
            kind: PersistentVolumeClaim
            metadata:
              name: test-claim
            spec:
              storageClassName: nfs-client-storage-class
              accessModes:
                - ReadWriteMany
              resources:
                requests:
                  storage: 1Mi

            Note: Change spec.storageClassName to nfs-client-storage-class.
        
        # Create the PVC
        $ kubectl apply -f test-claim.yaml

            persistentvolumeclaim/test-claim created

        # View the PVC
        $ kubectl get pvc

            NAME         STATUS   VOLUME                                     CAPACITY   ACCESS...   STORAGECLASS               AGE
            test-claim   Bound    pvc-8c2f1413-d0b5-47f7-93d9-83c23ad9119b   1Mi        RWX            nfs-client-storage-class   13s


        # View the PV
        $ kubectl get pv

            NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS               REASON   AGE
            pvc-ac6217f4-eed5-48ca-8eb6-a71dd12ce23a   1Mi        RWX            Delete           Bound    default/test-claim   nfs-client-storage-class            25s
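
        If the PVC stays in the Pending state instead of Bound, the events on the claim usually point at the cause (for example a storageClassName that does not match, or the provisioner Pod not running). This check is an addition to the original article:

        # Troubleshooting tip: inspect the PVC's events
        $ kubectl describe pvc test-claim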


    3) Deploy the test-pod.yaml file
        
        $ cd /home/k8s/nfs-subdir-external-provisioner/deploy
        $ cat test-pod.yaml 

            kind: Pod
            apiVersion: v1
            metadata:
              name: test-pod
            spec:
              containers:
              - name: test-pod
                image: busybox:stable
                command:
                  - "/bin/sh"
                args:
                  - "-c"
                  - "touch /mnt/SUCCESS && exit 0 || exit 1"
                volumeMounts:
                  - name: nfs-pvc
                    mountPath: "/mnt"
              restartPolicy: "Never"
              volumes:
                - name: nfs-pvc
                  persistentVolumeClaim:
                    claimName: test-claim


        # Create the Pod
        $ kubectl apply -f test-pod.yaml

            pod/test-pod created

        # View the Pods
        $ kubectl get pod

            NAME                                     READY   STATUS      RESTARTS   AGE
            nfs-client-provisioner-d66c499b4-tr2pt   1/1     Running     0          18m
            test-pod                                 0/1     Completed   0          5m18s
 


        # Check the NFS shared directory on the master
        $ cd /home/k8s/share  
        $ ls -la

            total 0
            drwxrwxr-x 3 xxx xxx  73 Nov 22 08:40 .
            drwxr-xr-x 5 xxx xxx 125 Nov 22 08:31 ..
            drwxrwxrwx 2 xxx xxx  21 Nov 22 08:43 default-test-claim-pvc-ac6217f4-eed5-48ca-8eb6-a71dd12ce23a  


        Note: The NFS provisioner automatically created the directory default-test-claim-pvc-ac6217f4-eed5-48ca-8eb6-a71dd12ce23a, and inside it you can see that the SUCCESS file has been created.

            Directories created by the NFS provisioner are named "<namespace>-<PVC name>-<PV name>"; the PV name contains a random string, so as long as the PVC is not deleted, the binding between the workload and its storage in K8s is not lost.

            The directory for a dynamic PV is created by the NFS client (the provisioner) and is written to the shared directory on the NFS server.
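
        To clean up the test resources afterwards, the objects can be deleted in reverse order. With this StorageClass (archiveOnDelete: "false" and reclaim policy Delete), deleting the PVC also removes the dynamically provisioned PV and its backing directory on the NFS share. This cleanup step is not part of the original article:

        $ cd /home/k8s/nfs-subdir-external-provisioner/deploy
        $ kubectl delete -f test-pod.yaml
        $ kubectl delete -f test-claim.yaml

        # Optionally remove the provisioner, StorageClass and RBAC objects as well
        $ kubectl delete -f class.yaml -f deployment.yaml -f rbac.yaml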


From: https://www.cnblogs.com/tkuang/p/16919806.html
