
Persistent Storage Volumes

Posted: 2022-09-24 15:24:19

  Using a network storage volume for persistence requires a clear understanding of the access details of the storage system involved: for example, configuring the server and path fields of an NFS volume depends on the server address and the shared directory path. This runs counter to Kubernetes' goal of hiding the underlying infrastructure from users and developers. Ideally, storage should be consumed the same way as compute: users and developers need not know which node a Pod runs on, nor what the storage system is or where it lives. To this end, Kubernetes' PersistentVolume subsystem adds an abstraction layer between users and administrators, decoupling how storage is consumed from how it is managed.

  See the official documentation: https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/

I. Concepts

1. What is a PV

  A PersistentVolume (PV) is a piece of storage on some storage system, provisioned by the cluster administrator. It abstracts the underlying shared storage and exposes it as a resource that users can request, implementing a "storage consumption" mechanism. Through its storage plugin mechanism, a PV can be backed by many network or cloud storage systems, such as Ceph, GlusterFS, and NFS. PVs are cluster-level resources and do not belong to any namespace.

2. What is a PVC

  A PersistentVolumeClaim (PVC) is a user's request for storage. A PVC is similar to a Pod: Pods consume node resources, while PVCs consume PV resources; Pods can request CPU and memory, while PVCs can request a specific amount of storage and specific access modes. Users who actually consume the storage need not care about the underlying implementation details; they simply use the PVC.

3. How Pod volumes, PVCs, and PVs relate

  A user consumes a PV by submitting a PersistentVolumeClaim (the claim), which acts as the consumer of PV resources: it requests a specific amount of space and an access mode (such as rw or ro), and binding the claim to a matching PV produces a usable PVC volume. A Pod then uses the storage through a persistentVolumeClaim volume that references the claim.

4. What is a StorageClass

  Even with PVCs, requesting a certain amount of space may not satisfy all of an application's storage requirements, and different applications may have different performance needs, such as read/write speed or concurrency. To solve this, Kubernetes introduces another resource object: StorageClass. With StorageClasses, an administrator can classify storage resources into types such as fast or slow storage, and users can tell the concrete characteristics of each kind of storage directly from the class description. A PVC can request a desired class, which is either matched against PVs the administrator created in advance, or used to dynamically provision a PV on demand, which can even eliminate the need to pre-create PVs altogether.
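For illustration, a class definition is only a few lines of YAML. The sketch below is hypothetical: the class name is made up, and the provisioner string assumes the community NFS subdir external provisioner has been deployed in the cluster; substitute whatever provisioner your environment actually uses.

```yaml
# Hypothetical StorageClass for "fast" NFS-backed storage.
# The provisioner value assumes the NFS subdir external provisioner is installed.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-fast
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

A PVC then selects the class by setting `storageClassName: nfs-fast`, and a matching PV is provisioned for it on demand.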

II. Creating a PV

1. PV manifest fields

  Inspect the fields available for defining a PV:

[root@k8s-master1 ~]# kubectl explain pv
KIND:     PersistentVolume
VERSION:  v1

DESCRIPTION:
     PersistentVolume (PV) is a storage resource provisioned by an
     administrator. It is analogous to a node. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes

FIELDS:
   apiVersion   <string> # API version
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string> # resource type
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>  # metadata
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object> # defines the PV's capacity, access modes, and reclaim policy
     Spec defines a specification of a persistent volume owned by the cluster.
     Provisioned by an administrator. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes

   status       <Object> # status
     Status represents the current information/status for the persistent volume.
     Populated by the system. Read-only. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes

  Inspect the common fields of the PV spec to see which plugins a PV can use to support different storage systems:

[root@k8s-master1 ~]# kubectl explain pv.spec
KIND:     PersistentVolume
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines a specification of a persistent volume owned by the cluster.
     Provisioned by an administrator. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes

     PersistentVolumeSpec is the specification of a persistent volume.

FIELDS:
   accessModes  <[]string>
     # Access modes; each PV is set to the specific modes supported by that volume.
     AccessModes contains all ways the volume can be mounted. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes

   awsElasticBlockStore <Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod.

   azureDisk    <Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount to
     the pod.

   azureFile    <Object>
     AzureFile represents an Azure File Service mount on the host and bind mount
     to the pod.

   capacity     <map[string]string>
     # The PV's storage capacity. Currently, size is the only resource that can
     # be set or requested; future attributes may include IOPS, throughput, etc.
     A description of the persistent volume's resources and capacity. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity

   cephfs       <Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime.

   cinder       <Object>
     Cinder represents a cinder volume attached and mounted on kubelets host
     machine.

   claimRef     <Object>
     ClaimRef is part of a bi-directional binding between PersistentVolume and
     PersistentVolumeClaim. Expected to be non-nil when bound.

   csi  <Object>
     CSI represents storage that is handled by an external CSI driver (Beta
     feature).

   fc   <Object>
     FC represents a Fibre Channel resource that is attached to a kubelet's
     host machine and then exposed to the pod.

   flexVolume   <Object>
     FlexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin.

   flocker      <Object>
     Flocker represents a Flocker volume attached to a kubelet's host machine
     and exposed to the pod.

   gcePersistentDisk    <Object>
     GCEPersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod.

   glusterfs    <Object>
     Glusterfs represents a Glusterfs volume that is attached to a host and
     exposed to the pod.

   hostPath     <Object>
     HostPath represents a directory on the host. Useful for single-node
     development and testing only; WILL NOT WORK in a multi-node cluster.

   iscsi        <Object>
     ISCSI represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod.

   local        <Object>
     Local represents directly-attached storage with node affinity.

   mountOptions <[]string>
     # List of mount options, e.g. ro, soft, hard.
     A list of mount options, e.g. ["ro", "soft"]. Not validated - mount will
     simply fail if one is invalid.

   nfs  <Object>
     NFS represents an NFS mount on the host. Provisioned by an admin.

   nodeAffinity <Object>
     # Node affinity; constrains which nodes can access this volume. Pods using
     # the PV are only scheduled to nodes selected by the node affinity.
     NodeAffinity defines constraints that limit what nodes this volume can be
     accessed from.

   persistentVolumeReclaimPolicy   <string>
     # What happens to the PV when it is released from its claim.
     Valid options are Retain (default for manually created PersistentVolumes),
     Delete (default for dynamically provisioned PersistentVolumes), and
     Recycle (deprecated). Recycle must be supported by the volume plugin
     underlying this PersistentVolume.

   photonPersistentDisk <Object>
     PhotonPersistentDisk represents a PhotonController persistent disk
     attached and mounted on kubelets host machine.

   portworxVolume       <Object>
     PortworxVolume represents a portworx volume attached and mounted on
     kubelets host machine.

   quobyte      <Object>
     Quobyte represents a Quobyte mount on the host that shares a pod's
     lifetime.

   rbd  <Object>
     RBD represents a Rados Block Device mount on the host that shares a pod's
     lifetime.

   scaleIO      <Object>
     ScaleIO represents a ScaleIO persistent volume attached and mounted on
     Kubernetes nodes.

   storageClassName     <string>
     # Name of the StorageClass this PV belongs to; empty (the default) means
     # it belongs to no StorageClass. A PV of a given class can only bind to
     # PVCs requesting that class.
     Name of StorageClass to which this persistent volume belongs.

   storageos    <Object>
     StorageOS represents a StorageOS volume that is attached to the kubelet's
     host machine and mounted into the pod.

   volumeMode   <string>
     # Volume mode: whether the volume is used as a formatted filesystem or a
     # raw block device; defaults to Filesystem.
     volumeMode defines if a volume is intended to be used with a formatted
     filesystem or to remain in raw block state. Value of Filesystem is
     implied when not included in spec.

   vsphereVolume        <Object>
     VsphereVolume represents a vSphere volume attached and mounted on kubelets
     host machine.

The accessModes and persistentVolumeReclaimPolicy fields deserve a closer look:

  1) accessModes <[]string>  # access modes; each PV's access modes are set to the specific modes that the particular volume supports.

For example, NFS can support multiple read/write clients, but a particular NFS PV might be exported read-only on the server. Each PV has its own set of access modes describing that specific PV's capabilities.

Commonly used access modes:

ReadWriteOnce: can be mounted read-write by a single node;

ReadOnlyMany: can be mounted read-only by many nodes simultaneously;

ReadWriteMany: can be mounted read-write by many nodes simultaneously;

ReadWriteOncePod: can be mounted read-write by a single Pod (CSI volumes only, Kubernetes v1.22+)

  2) persistentVolumeReclaimPolicy  # what happens to the PV's storage when it is released from its claim.

The only available policies are Retain (the default), Recycle, and Delete:

Retain: leave the volume as-is; the administrator reclaims it manually later;

Recycle: scrub the space, i.e. delete all files under the volume directory (including subdirectories and hidden files); only NFS and hostPath support this, and it is deprecated;

Delete: delete the volume itself; only some cloud storage systems support this, such as AWS EBS, GCE PD, Azure Disk, and Cinder. Delete is the default for dynamically provisioned PVs.
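A PV's reclaim policy is not fixed at creation time; it can be changed in place with `kubectl patch`. The command below is a sketch (the PV name v2 matches the example manifests in this article; substitute your own):

```shell
# Switch an existing PV's reclaim policy from Delete to Retain
kubectl patch pv v2 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```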

2. Creating PV resources

 The example manifest below defines PVs that use an NFS storage backend, including ones that support multi-client read/write access. Once the backing storage system meets the requirements, the PV resources can be created as follows:

  1) Create the NFS shared directories

  On the NFS server, create the shared filesystems that NFS will export:

[root@k8s-master1 ~]# mkdir pv
[root@k8s-master1 ~]# cd pv
[root@k8s-master1 pv]# mkdir /data/volume_test/v{1,2,3,4,5,6,7,8,9,10} -p
[root@k8s-master1 pv]# vim /etc/exports  # configure the shared directories
[root@k8s-master1 pv]# cat /etc/exports
/data/volumes 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v1 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v2 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v3 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v4 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v5 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v6 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v7 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v8 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v9 10.0.0.131/24(rw,no_root_squash)
/data/volume_test/v10 10.0.0.131/24(rw,no_root_squash)
[root@k8s-master1 pv]# exportfs -arv  # apply the configuration
exporting 10.0.0.131/24:/data/volume_test/v10
exporting 10.0.0.131/24:/data/volume_test/v9
exporting 10.0.0.131/24:/data/volume_test/v8
exporting 10.0.0.131/24:/data/volume_test/v7
exporting 10.0.0.131/24:/data/volume_test/v6
exporting 10.0.0.131/24:/data/volume_test/v5
exporting 10.0.0.131/24:/data/volume_test/v4
exporting 10.0.0.131/24:/data/volume_test/v3
exporting 10.0.0.131/24:/data/volume_test/v2
exporting 10.0.0.131/24:/data/volume_test/v1
exporting 10.0.0.131/24:/data/volumes
[root@k8s-master1 pv]#

  2) Write the PV manifest and create the PV resources

[root@k8s-master1 pv]# vim pv-demo.yaml
[root@k8s-master1 pv]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v1
spec:
  capacity:
    storage: 1Gi   # the PV's storage capacity
  accessModes: ["ReadWriteOnce"]  # access mode
  nfs:
    path: /data/volume_test/v1  # back the PV with an NFS shared directory
    server: 10.0.0.131  # NFS server address
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v2
spec:
  persistentVolumeReclaimPolicy: Delete
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    path: /data/volume_test/v2
    server: 10.0.0.131
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v3
spec:
  capacity:
    storage: 3Gi
  accessModes: ["ReadOnlyMany"]
  nfs:
    path: /data/volume_test/v3
    server: 10.0.0.131
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v4
spec:
  capacity:
    storage: 4Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v4
    server: 10.0.0.131
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v5
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v5
    server: 10.0.0.131
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v6
spec:
  capacity:
    storage: 6Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v6
    server: 10.0.0.131
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v7
spec:
  capacity:
    storage: 7Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v7
    server: 10.0.0.131
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v8
spec:
  capacity:
    storage: 8Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v8
    server: 10.0.0.131
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v9
spec:
  capacity:
    storage: 9Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v9
    server: 10.0.0.131
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v10
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v10
    server: 10.0.0.131

[root@k8s-master1 pv]# kubectl apply -f pv-demo.yaml
persistentvolume/v1 created
persistentvolume/v2 created
persistentvolume/v3 created
persistentvolume/v4 created
persistentvolume/v5 created
persistentvolume/v6 created
persistentvolume/v7 created
persistentvolume/v8 created
persistentvolume/v9 created
persistentvolume/v10 created
[root@k8s-master1 pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
v1     1Gi        RWO            Retain           Available                                   24s
v10    10Gi       RWO,RWX        Retain           Available                                   24s
v2     2Gi        RWX            Delete           Available                                   24s
v3     3Gi        ROX            Retain           Available                                   24s
v4     4Gi        RWO,RWX        Retain           Available                                   24s
v5     5Gi        RWO,RWX        Retain           Available                                   24s
v6     6Gi        RWO,RWX        Retain           Available                                   24s
v7     7Gi        RWO,RWX        Retain           Available                                   24s
v8     8Gi        RWO,RWX        Retain           Available                                   24s
v9     9Gi        RWO,RWX        Retain           Available                                   24s

  Listing the resources shows the PVs' details. A created PV can be in one of four states, corresponding to phases of the PV lifecycle:

Available: a free resource that has not yet been bound by any PVC

Bound: bound to a PVC

Released: the bound PVC has been deleted, but the resource has not yet been reclaimed by the cluster

Failed: automatic reclamation of the resource failed

III. Creating a PVC

  A PVC is a volume-type resource created by claiming some PV; a PVC binds to exactly one PV, and the user need not care about the underlying implementation details. When claiming, the user only specifies the desired size, access modes, an optional PV label selector, a StorageClass, and similar settings.

1. Inspect the fields nestable under a PVC's spec

[root@k8s-master1 ~]# kubectl explain pvc.spec
KIND:     PersistentVolumeClaim
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines the desired characteristics of a volume requested by a pod
     author. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

     PersistentVolumeClaimSpec describes the common attributes of storage
     devices and allows a Source for provider-specific attributes

FIELDS:
   accessModes  <[]string> # the PVC's access modes; the available modes are the same as for PVs
     AccessModes contains the desired access modes the volume should have. More
     info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1

   dataSource   <Object>
     This field can be used to specify either: * An existing VolumeSnapshot
     object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC
     (PersistentVolumeClaim) * An existing custom resource that implements data
     population (Alpha) In order to use custom resource types that implement
     data population, the AnyVolumeDataSource feature gate must be enabled. If
     the provisioner or an external controller can support the specified data
     source, it will create a new volume based on the contents of the specified
     data source.

   resources    <Object>
     # Minimum resources the PVC requires; currently, a PVC can only be limited
     # by its storage size.
     Resources represents the minimum resources the volume should have. More
     info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources

   selector     <Object>
     # Label selector (matchLabels) or match expressions (matchExpressions)
     # applied to PVs when binding, used to pick the PV to bind; if both
     # mechanisms are specified, a PV must satisfy both to be selected.
     A label query over volumes to consider for binding.

   storageClassName     <string>
     # Name of the StorageClass the claim depends on.
     Name of the StorageClass required by the claim. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1

   volumeMode   <string>
     # Volume mode: whether the volume is used as a filesystem or a raw block
     # device; defaults to Filesystem.
     volumeMode defines what type of volume is required by the claim. Value of
     Filesystem is implied when not included in claim spec.

   volumeName   <string>
     # Directly names the PV to bind to.
     VolumeName is the binding reference to the PersistentVolume backing this
     claim.
[root@k8s-master1 ~]#
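As a sketch of how the selector field is used, the claim below would only consider PVs carrying a matching label (the storage-tier label is hypothetical; the PVs created earlier in this article carry no labels):

```yaml
# Hypothetical PVC that only considers PVs labeled storage-tier: slow.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: selective-pvc
spec:
  accessModes: ["ReadOnlyMany"]
  selector:
    matchLabels:
      storage-tier: slow    # hypothetical label; must be present on a candidate PV
  resources:
    requests:
      storage: 3Gi
```

Alternatively, setting `volumeName` in the spec requests one exact PV by name instead of letting the controller choose.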

2. Creating a PVC resource

  The manifest below defines an example PVC resource:

[root@k8s-master1 pv]# vim pvc-demo1.yaml
[root@k8s-master1 pv]# cat pvc-demo1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc1
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
[root@k8s-master1 pv]# kubectl apply -f pvc-demo1.yaml
persistentvolumeclaim/my-pvc1 created
[root@k8s-master1 pv]# kubectl get pvc -o wide
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
my-pvc1   Bound    v2       2Gi        RWX                           7s    Filesystem
[root@k8s-master1 pv]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE   VOLUMEMODE
v1     1Gi        RWO            Retain           Available                                             18m   Filesystem
v10    10Gi       RWO,RWX        Retain           Available                                             18m   Filesystem
v2     2Gi        RWX            Delete           Bound       default/my-pvc1                           18m   Filesystem
v3     3Gi        ROX            Retain           Available                                             18m   Filesystem
v4     4Gi        RWO,RWX        Retain           Available                                             18m   Filesystem
v5     5Gi        RWO,RWX        Retain           Available                                             18m   Filesystem
v6     6Gi        RWO,RWX        Retain           Available                                             18m   Filesystem
v7     7Gi        RWO,RWX        Retain           Available                                             18m   Filesystem
v8     8Gi        RWO,RWX        Retain           Available                                             18m   Filesystem
v9     9Gi        RWO,RWX        Retain           Available                                             18m   Filesystem

  PV v2's STATUS is now Bound, meaning it has been bound by my-pvc1; the PVC my-pvc1 is bound to PV v2 and can use its 2Gi of capacity.

  With the PVC created, a Pod can reference it via a persistentVolumeClaim volume and mount it into a container for data persistence. Note that PVs are cluster-level resources while PVCs are namespaced: a PVC is not restricted by namespace when binding a PV, but a Pod can only reference PVCs in its own namespace.

IV. Using a PVC in a Pod

  To consume a PVC in a Pod, define a volume with the persistentVolumeClaim field, which nests just two fields:

[root@k8s-master1 ~]# kubectl explain pod.spec.volumes.persistentVolumeClaim
KIND:     Pod
VERSION:  v1

RESOURCE: persistentVolumeClaim <Object>

DESCRIPTION:
     PersistentVolumeClaimVolumeSource represents a reference to a
     PersistentVolumeClaim in the same namespace. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

     PersistentVolumeClaimVolumeSource references the user's PVC in the same
     namespace. This volume finds the bound PV and mounts that volume for the
     pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around
     another type of volume that is owned by someone else (the system).

FIELDS:
   claimName    <string> -required-  # name of the PVC volume to use; the PVC must be in the same namespace as the Pod
     ClaimName is the name of a PersistentVolumeClaim in the same namespace as
     the pod using this volume. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   readOnly     <boolean>  # whether to force the volume to mount read-only; defaults to false
     Will force the ReadOnly setting in VolumeMounts. Default false.

1. Creating a Pod that uses the PVC

  The manifest below defines a Pod that directly uses the PVC created above:

[root@k8s-master1 pv]# vim pod-pvc.yaml
[root@k8s-master1 pv]# cat pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: my-pvc1
[root@k8s-master1 pv]# kubectl apply -f pod-pvc.yaml
pod/pvc-pod created
[root@k8s-master1 pv]# kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
pvc-pod   1/1     Running   0          77s   10.244.36.75   k8s-node1   <none>           <none>
[root@k8s-master1 pv]# kubectl describe pods pvc-pod
Name:         pvc-pod
Namespace:    default
Priority:     0
Node:         k8s-node1/10.0.0.132
Start Time:   Sat, 24 Sep 2022 14:08:59 +0800
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 10.244.36.75/32
              cni.projectcalico.org/podIPs: 10.244.36.75/32
Status:       Running
IP:           10.244.36.75
IPs:
  IP:  10.244.36.75
Containers:
  nginx:
    Container ID:   docker://b8152bb121414fed127693d51dd4cead76be69b8a10391be5cc3bfa1db08bfed
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:b95a99feebf7797479e0c5eb5ec0bdfa5d9f504bc94da550c2f58e839ea6914f
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 24 Sep 2022 14:09:02 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from html (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5n29f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  html:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-pvc1
    ReadOnly:   false
  default-token-5n29f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5n29f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  100s  default-scheduler  Successfully assigned default/pvc-pod to k8s-node1
  Normal  Pulled     97s   kubelet            Container image "nginx:latest" already present on machine
  Normal  Created    97s   kubelet            Created container nginx
  Normal  Started    97s   kubelet            Started container nginx

2. Testing data persistence

  Once created, the Pod is in the Running state. Log in to the Pod and test data persistence:

[root@k8s-master1 pv]# kubectl exec -it pvc-pod -- /bin/sh
# cd /usr/share/nginx/html
# ls
# pwd
/usr/share/nginx/html
# echo "hello world" >>index.html
# cat index.html
hello world
# exit
[root@k8s-master1 pv]# kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
pvc-pod   1/1     Running   0          7m24s   10.244.36.75   k8s-node1   <none>           <none>
[root@k8s-master1 pv]# curl 10.244.36.75
hello world

  Delete the Pod, then check how the PVC changes:

[root@k8s-master1 pv]# kubectl delete pods pvc-pod
pod "pvc-pod" deleted
[root@k8s-master1 pv]# kubectl get pvc -o wide
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
my-pvc1   Bound    v2       2Gi        RWX                           28m   Filesystem
# check whether the earlier data is still in the NFS server's shared directory
[root@k8s-master1 ~]# ll /data/volume_test/v2
total 4
-rw-r--r-- 1 root root 12 Sep 24 14:15 index.html
[root@k8s-master1 ~]# cat /data/volume_test/v2/index.html
hello world

  After the Pod is deleted, the PVC remains bound to the PV and the data in the shared directory survives. Re-creating the Pod, the page is still served, so the data has been persisted:

[root@k8s-master1 pv]# kubectl apply -f pod-pvc.yaml
pod/pvc-pod created
[root@k8s-master1 pv]# kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
pvc-pod   1/1     Running   0          7s    10.244.36.83   k8s-node1   <none>           <none>
[root@k8s-master1 pv]# curl 10.244.36.83
hello world
[root@k8s-master1 pv]#

  Now delete both the Pod and the PVC, and check whether the data in the shared directory still exists:

[root@k8s-master1 pv]# kubectl delete pods pvc-pod
pod "pvc-pod" deleted
[root@k8s-master1 pv]# kubectl get pods -o wide
No resources found in default namespace.
[root@k8s-master1 pv]# kubectl delete pvc my-pvc1
persistentvolumeclaim "my-pvc1" deleted
[root@k8s-master1 pv]# kubectl get pvc -o wide
No resources found in default namespace.
[root@k8s-master1 pv]# ls -lrt /data/volume_test/v2/
total 4
-rw-r--r-- 1 root root 12 Sep 24 14:15 index.html
[root@k8s-master1 pv]# cat /data/volume_test/v2/index.html
hello world
[root@k8s-master1 pv]#

  The data in the shared directory is still there; deleting the Pod and the PVC does not cause data loss.

3. How a re-created PVC binds to PVs

  Re-create the PVC and check whether it binds to the previously bound PV again:

[root@k8s-master1 pv]# kubectl apply -f pvc-demo1.yaml
persistentvolumeclaim/my-pvc1 created
[root@k8s-master1 pv]# kubectl get pvc -o wide
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
my-pvc1   Bound    v4       4Gi        RWO,RWX                       7s    Filesystem
[root@k8s-master1 pv]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE   VOLUMEMODE
v1     1Gi        RWO            Retain           Available                                             62m   Filesystem
v10    10Gi       RWO,RWX        Retain           Available                                             62m   Filesystem
v2     2Gi        RWX            Delete           Failed      default/my-pvc1                           62m   Filesystem
v3     3Gi        ROX            Retain           Available                                             62m   Filesystem
v4     4Gi        RWO,RWX        Retain           Bound       default/my-pvc1                           62m   Filesystem
v5     5Gi        RWO,RWX        Retain           Available                                             62m   Filesystem
v6     6Gi        RWO,RWX        Retain           Available                                             62m   Filesystem
v7     7Gi        RWO,RWX        Retain           Available                                             62m   Filesystem
v8     8Gi        RWO,RWX        Retain           Available                                             62m   Filesystem
v9     9Gi        RWO,RWX        Retain           Available                                             62m   Filesystem

  The re-created PVC bound to PV v4 instead. Note that v2 now shows Failed: its reclaim policy is Delete, but a statically provisioned NFS PV has no provisioner capable of deleting the backing volume, so reclamation failed when the previous claim was removed.

  Delete the PVC again and watch the PV states: v4's state changes to Released.

[root@k8s-master1 pv]# kubectl delete pvc my-pvc1
persistentvolumeclaim "my-pvc1" deleted
[root@k8s-master1 pv]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE   VOLUMEMODE
v1     1Gi        RWO            Retain           Available                                             64m   Filesystem
v10    10Gi       RWO,RWX        Retain           Available                                             64m   Filesystem
v2     2Gi        RWX            Delete           Failed      default/my-pvc1                           64m   Filesystem
v3     3Gi        ROX            Retain           Available                                             64m   Filesystem
v4     4Gi        RWO,RWX        Retain           Released    default/my-pvc1                           64m   Filesystem
v5     5Gi        RWO,RWX        Retain           Available                                             64m   Filesystem
v6     6Gi        RWO,RWX        Retain           Available                                             64m   Filesystem
v7     7Gi        RWO,RWX        Retain           Available                                             64m   Filesystem
v8     8Gi        RWO,RWX        Retain           Available                                             64m   Filesystem
v9     9Gi        RWO,RWX        Retain           Available                                             64m   Filesystem

  Creating the PVC yet again, it still will not bind to a PV in the Released state; it binds to another Available PV instead:

[root@k8s-master1 pv]# kubectl apply -f pvc-demo1.yaml
persistentvolumeclaim/my-pvc1 created
[root@k8s-master1 pv]# kubectl get pvc -o wide
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
my-pvc1   Bound    v5       5Gi        RWO,RWX                       5s    Filesystem
[root@k8s-master1 pv]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE   VOLUMEMODE
v1     1Gi        RWO            Retain           Available                                             68m   Filesystem
v10    10Gi       RWO,RWX        Retain           Available                                             68m   Filesystem
v2     2Gi        RWX            Delete           Failed      default/my-pvc1                           68m   Filesystem
v3     3Gi        ROX            Retain           Available                                             68m   Filesystem
v4     4Gi        RWO,RWX        Retain           Released    default/my-pvc1                           68m   Filesystem
v5     5Gi        RWO,RWX        Retain           Bound       default/my-pvc1                           68m   Filesystem
v6     6Gi        RWO,RWX        Retain           Available                                             68m   Filesystem
v7     7Gi        RWO,RWX        Retain           Available                                             68m   Filesystem
v8     8Gi        RWO,RWX        Retain           Available                                             68m   Filesystem
v9     9Gi        RWO,RWX        Retain           Available                                             68m   Filesystem
[root@k8s-master1 pv]# kubectl get pvc -o wide
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE    VOLUMEMODE
my-pvc1   Bound    v5       5Gi        RWO,RWX                       2m9s   Filesystem
[root@k8s-master1 pv]#

 Delete all PVs and the existing PVC, then re-create them and check whether the PVC binds to PV v2 again:

[root@k8s-master1 pv]# kubectl delete pvc my-pvc1
persistentvolumeclaim "my-pvc1" deleted
[root@k8s-master1 pv]# kubectl delete -f pv-demo.yaml
persistentvolume "v1" deleted
persistentvolume "v2" deleted
persistentvolume "v3" deleted
persistentvolume "v4" deleted
persistentvolume "v5" deleted
persistentvolume "v6" deleted
persistentvolume "v7" deleted
persistentvolume "v8" deleted
persistentvolume "v9" deleted
persistentvolume "v10" deleted
[root@k8s-master1 pv]# kubectl apply -f pv-demo.yaml
persistentvolume/v1 created
persistentvolume/v2 created
persistentvolume/v3 created
persistentvolume/v4 created
persistentvolume/v5 created
persistentvolume/v6 created
persistentvolume/v7 created
persistentvolume/v8 created
persistentvolume/v9 created
persistentvolume/v10 created
[root@k8s-master1 pv]# kubectl apply -f pvc-demo1.yaml
persistentvolumeclaim/my-pvc1 created
[root@k8s-master1 pv]# kubectl get pv -o wide
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE   VOLUMEMODE
v1     1Gi        RWO            Retain           Available                                             12s   Filesystem
v10    10Gi       RWO,RWX        Retain           Available                                             12s   Filesystem
v2     2Gi        RWX            Delete           Bound       default/my-pvc1                           12s   Filesystem
v3     3Gi        ROX            Retain           Available                                             12s   Filesystem
v4     4Gi        RWO,RWX        Retain           Available                                             12s   Filesystem
v5     5Gi        RWO,RWX        Retain           Available                                             12s   Filesystem
v6     6Gi        RWO,RWX        Retain           Available                                             12s   Filesystem
v7     7Gi        RWO,RWX        Retain           Available                                             12s   Filesystem
v8     8Gi        RWO,RWX        Retain           Available                                             12s   Filesystem
v9     9Gi        RWO,RWX        Retain           Available                                             12s   Filesystem
[root@k8s-master1 pv]# kubectl get pvc -o wide
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
my-pvc1   Bound    v2       2Gi        RWX                           11s   Filesystem

  Re-create the Pod and access it again to verify the data is still there:

[root@k8s-master1 pv]# kubectl apply -f pod-pvc.yaml
pod/pvc-pod created
[root@k8s-master1 pv]# kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
pvc-pod   1/1     Running   0          8s    10.244.36.86   k8s-node1   <none>           <none>
[root@k8s-master1 pv]# curl 10.244.36.86
hello world
[root@k8s-master1 pv]#

  Log in to the Pod and modify the served page; the data in the backing storage directory changes as well:

[root@k8s-master1 pv]# kubectl exec -it pvc-pod -- /bin/sh
# cd /usr/share/nginx/html
# cat index.html
hello world
# echo "test pvc" >index.html
# cat index.html
test pvc
# exit
[root@k8s-master1 pv]# curl 10.244.36.86
test pvc
[root@k8s-master1 pv]# cat /data/volume_test/v2/index.html
test pvc
[root@k8s-master1 pv]#

4. Notes on using PVCs and PVs

  1) With static provisioning, every PVC requires a suitable PV to have been carved out in advance, which can be inconvenient. Using a StorageClass, a PV can instead be dynamically provisioned at the moment the PVC is created; the PV does not need to exist beforehand.

  2) When a PVC is bound to a PV that uses the default Retain reclaim policy, deleting the PVC leaves the PV in the Released state. To reuse that PV, delete the PV object manually with kubectl delete pv <pv_name>; this does not delete the data on the volume. When the PVC is re-created (along with the PV), it binds to the best-matching PV again and the original data is still there, so nothing is lost.
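Instead of deleting the whole PV object, a Released PV can also be returned to Available by clearing the stale claim reference it still holds; the data on the backing storage is untouched either way. Using the v4 PV from the example above:

```shell
# Drop the stale claimRef so the Released PV becomes Available again
kubectl patch pv v4 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
```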

From: https://www.cnblogs.com/jiawei2527/p/16725332.html
