
K8S Static PV & PVC Configuration


Persistent Volumes and Dynamic Storage

NFS PV & PVC Hands-on Example


1. Create the backend NFS storage node
[root@web04 ~]# mkdir -pv /kubelet/pv/pv00{1..3}
mkdir: created directory ‘/kubelet/pv’
mkdir: created directory ‘/kubelet/pv/pv001’
mkdir: created directory ‘/kubelet/pv/pv002’
mkdir: created directory ‘/kubelet/pv/pv003’
[root@web04 ~]# tree /kubelet/pv/
/kubelet/pv/
├── pv001
├── pv002
└── pv003
[root@web04 ~]# tail -n3 /etc/exports
/kubelet/pv/pv001 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
/kubelet/pv/pv002 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
/kubelet/pv/pv003 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
[root@web04 ~]# exportfs
/kubelet/pv/pv001
		10.0.0.0/24
/kubelet/pv/pv002
		10.0.0.0/24
/kubelet/pv/pv003
		10.0.0.0/24
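Before writing the PV manifest, it is worth confirming that a cluster node can actually mount the export. A minimal check, run from any worker node with the NFS client utilities installed (a sketch; <nfs-server-ip> is a placeholder for web04's address):

showmount -e <nfs-server-ip>                         # should list /kubelet/pv/pv001..pv003
mount -t nfs <nfs-server-ip>:/kubelet/pv/pv001 /mnt  # temporary test mount
touch /mnt/write-test && rm /mnt/write-test          # verifies rw and no_root_squash take effect
umount /mnt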
2. Create the PersistentVolume (PV)
[root@k8smaster01 ~]# vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  # Reclaim policy (one of the three policies: Retain / Recycle / Delete)
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.2
    path: /kubelet/pv/pv001
  # Access mode (one of the four modes, see the table below)
  accessModes:
  - ReadWriteMany
  # Volume capacity
  capacity:
    storage: 5Gi
    
[root@k8smaster01 ~]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   5Gi        RWX            Retain           Available                                   17s
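Only pv001 has been turned into a PV so far; the other two exported directories can be exposed the same way. A sketch for the second one (the PV name nfs-pv002 is made up for illustration):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv002            # hypothetical name, mirrors nfs-pv above
spec:
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.2
    path: /kubelet/pv/pv002
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi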
Access modes:
  • ReadWriteOnce (RWO): the volume can be mounted read-write by a single node. Multiple Pods running on that same node can still share the PVC for read-write access.
  • ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes, so any number of Pods across the cluster can read the data, but no writes are allowed.
  • ReadWriteMany (RWX): the volume can be mounted read-write by many nodes, allowing Pods on different nodes to read and write the same PVC concurrently. Suited to highly concurrent read/write workloads.
  • ReadWriteOncePod (RWOP): stable since Kubernetes v1.29; the volume can be mounted read-write by a single Pod only, regardless of which node that Pod runs on.

Reclaim policies:
  • Delete: when the PVC releases the volume, Kubernetes deletes the volume from the system and all data on it is permanently removed. Requires a volume plugin that supports deletion.
  • Recycle (deprecated): when the PVC releases the volume, it is scrubbed (rm -rf /thevolume/*) and returned to the pool of unbound volumes for reuse. Requires a volume plugin that supports recycling.
  • Retain: when the PVC releases the volume, the PV stays in the Released state until an administrator manually reclaims or reassigns it. This is the default for manually created PVs and suits cases where data must be preserved or managed by hand.
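The reclaim policy of an existing PV can also be changed in place with kubectl patch; for example (using the nfs-pv created above):

kubectl patch pv nfs-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'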


Columns of the kubectl get pv output:
  • NAME: the name of the PersistentVolume; this is the PV's unique identifier.
  • CAPACITY: the size of the volume (for example 10Gi for 10 gibibytes), i.e. the maximum amount of data the PV can hold.
  • ACCESS MODES: how the volume may be mounted and accessed; common values are ReadWriteOnce (RWO), ReadOnlyMany (ROX), ReadWriteMany (RWX) and ReadWriteOncePod (RWOP).
  • RECLAIM POLICY: how Kubernetes handles the volume once its PVC releases it; possible values are Delete, Recycle and Retain.
  • STATUS: the state of the volume. Common states:
    - Available: the volume is free and not yet bound to any PVC.
    - Bound: the volume is bound to a PVC.
    - Released: the PVC has been deleted, but the volume has not yet been reclaimed or deleted.
    - Failed: something went wrong while creating or binding the volume.
  • CLAIM: if the volume is bound, the namespace and name of the bound PVC in <namespace>/<claim> form; empty otherwise.
  • STORAGECLASS: the name of the StorageClass used to create the volume; empty or <none> if no StorageClass was used.
  • VOLUMEATTRIBUTESCLASS: the class describing additional volume attributes or requirements; not every cluster shows this column.
  • REASON: when the volume is in an abnormal state (such as Failed), the reason behind it.
  • AGE: how long ago the volume was created, shown in days, hours and so on.
3. Create the PersistentVolumeClaim (PVC)
[root@k8smaster01 pvc]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    limits:
      storage: 2Gi
    requests:
      storage: 1Gi
      
[root@k8smaster01 ~]# kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/nfs-pv   5Gi        RWX            Retain           Bound    default/nfs-pvc                           13m

NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-pvc   Bound    nfs-pv   5Gi        RWX                           5s
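One caveat: once the cluster has a default StorageClass (as it will after the dynamic-provisioning section below), a PVC that omits storageClassName may be provisioned dynamically instead of binding to nfs-pv. Setting an empty storageClassName pins the claim to static binding; a minimal sketch:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: ""       # disable dynamic provisioning; bind only to a pre-created PV
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi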


4. Create the Pod
[root@k8smaster01 ~]# cat nginx-deployment-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deployment-pv
  name: nginx-deployment-pv
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment-pv
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deployment-pv
    spec:
      volumes:
      - name: data-pvc
        persistentVolumeClaim:
          claimName: nfs-pvc
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: data-pvc
          mountPath: /usr/share/nginx/html
        resources: {}
status: {}
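The curl below targets 10.100.1.126, which looks like a Service ClusterIP, but the article does not show the Service manifest. A minimal sketch of what it might look like (the Service name is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment-pv   # hypothetical name
spec:
  selector:
    app: nginx-deployment-pv
  ports:
  - port: 80
    targetPort: 80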

# The 403 at this point is expected: the NFS-backed document root is still empty, so nginx has no index.html to serve.
[root@k8smaster01 ~]# curl  10.100.1.126 -I
HTTP/1.1 403 Forbidden
Server: nginx/1.27.3
Date: Thu, 23 Jan 2025 09:15:37 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive


 
[root@k8smaster01 pvc]# kubectl exec -it pods/ng-pvc-6867d44796-czxbt sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
# echo "HackMe~" > /usr/share/nginx/html/index.html
# 
# exit
[root@k8smaster01 pvc]# curl  -I 10.244.5.80
HTTP/1.1 200 OK
Server: nginx/1.27.3
Date: Wed, 11 Dec 2024 02:53:22 GMT
Content-Type: text/html
Content-Length: 8
Last-Modified: Wed, 11 Dec 2024 02:53:19 GMT
Connection: keep-alive
ETag: "6758fe9f-8"
Accept-Ranges: bytes
[root@k8smaster01 pvc]# curl 10.244.5.80
HackMe~

[root@web04 ~]# cat /kubelet/pv/pv001/index.html
HackMe~

Cleanup

  • Retain -- manual reclamation
  • Recycle -- basic scrub (rm -rf /thevolume/*)
  • Delete -- delete the volume
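With the Retain policy used above, deleting the PVC only moves the PV to the Released state and leaves the data on the NFS share untouched, so cleanup is manual. A sketch of the steps (resource names match the manifests above):

kubectl delete deployment nginx-deployment-pv
kubectl delete pvc nfs-pvc
kubectl get pv                       # nfs-pv now shows STATUS Released
kubectl delete pv nfs-pv             # remove the PV object itself
# the data stays on the NFS server until removed by hand, e.g. on web04:
# rm -rf /kubelet/pv/pv001/*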

NFS Dynamic Storage Hands-on Example

Kubernetes does not ship an in-tree NFS provisioner, so an external provisioner is required to create a StorageClass backed by NFS.

Volume plugin     Internal provisioner   Config example
AzureFile         ✓                      Azure File
CephFS            -                      -
FC                -                      -
FlexVolume        -                      -
iSCSI             -                      -
Local             -                      Local
NFS               -                      NFS
PortworxVolume    ✓                      Portworx Volume
RBD               ✓                      Ceph RBD
VsphereVolume     ✓                      vSphere
1. Export the backend NFS storage
[root@web04 ~]# mkdir -pv /kubelet/storage_class
[root@web04 ~]# tail -n1 /etc/exports
/kubelet/storage_class 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
[root@web04 ~]# systemctl restart nfs-server.service
[root@web04 ~]# exportfs
/kubelet/storage_class
		10.0.0.0/24
2. Deploy the NFS provisioner to enable the StorageClass (SC) feature
[root@k8smaster01 ~]# git clone https://gitee.com/yinzhengjie/k8s-external-storage.git
[root@k8smaster01 ~]# cd k8s-external-storage/nfs-client/deploy/

# Edit the provisioner Deployment manifest
[root@k8smaster01 deploy]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # image: quay.io/external_storage/nfs-client-provisioner:latest
          image: registry.cn-hangzhou.aliyuncs.com/k8s_study_rfb/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.7
            - name: NFS_PATH
              value: /kubelet/storage_class
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.7
            path: /kubelet/storage_class


# Edit the dynamic StorageClass manifest
[root@k8smaster01 deploy]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  # archiveOnDelete: "false"
  archiveOnDelete: "true"

# Apply the StorageClass
[root@k8smaster01 deploy]# kubectl apply -f class.yaml && kubectl get sc
storageclass.storage.k8s.io/managed-nfs-storage created
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  0s

[root@k8smaster01 deploy]# kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  10m
3. Apply the RBAC roles
[root@k8smaster01 deploy]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

# Apply the RBAC roles and bindings
[root@k8smaster01 deploy]#  kubectl apply -f rbac.yaml

# Apply the NFS dynamic provisioner
[root@k8smaster01 deploy]# kubectl apply -f deployment.yaml


[root@k8smaster01 deploy]# kubectl get pods,sc
NAME                                          READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-5db5f57976-ghs8l   1/1     Running   0          4s

NAME                                              PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  11m
4. Test the dynamic StorageClass
# PVC manifest
[root@k8smaster01 deploy]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  # Select the StorageClass resource via annotation
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
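The volume.beta.kubernetes.io/storage-class annotation is the legacy way of selecting a StorageClass; on current clusters the spec.storageClassName field does the same job. An equivalent sketch:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # replaces the legacy annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi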

# Pod manifest
[root@k8smaster01 deploy]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@k8smaster01 deploy]# kubectl apply -f test-claim.yaml -f test-pod.yaml


# Backend storage directory
[root@web04 ~]# ll /kubelet/storage_class/
total 0
drwxrwxrwx 2 root root 21 Dec 11 15:29 default-test-claim-pvc-72dca5dd-eb1e-4826-a18a-0773acfe55cf
[root@web04 ~]# ll /kubelet/storage_class/default-test-claim-pvc-72dca5dd-eb1e-4826-a18a-0773acfe55cf/
total 0
-rw-r--r-- 1 root root 0 Dec 11 15:29 SUCCESS
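Because the StorageClass was created with archiveOnDelete: "true", deleting the test claim should not wipe the data; the provisioner is expected to rename the directory with an archived- prefix instead. A quick way to check (a sketch; the exact directory name depends on the PVC's UID):

kubectl delete -f test-pod.yaml -f test-claim.yaml
# on the NFS server, the directory should now be archived, e.g.
# /kubelet/storage_class/archived-default-test-claim-pvc-<uid>
ls /kubelet/storage_class/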


From: https://www.cnblogs.com/lin-strive/p/18688249
