
Kubernetes: Storage



Volume Types

Reference:
https://kubernetes.io/zh-cn/docs/concepts/storage/volumes/
NFS is strongly discouraged in production because it is a likely single point of failure. Prefer distributed storage or a public-cloud NAS service, e.g. Ceph, GlusterFS (possibly no longer maintained), or MinIO.

Example 1: Sharing data with emptyDir

When a Pod is deleted, the data in its emptyDir volume is deleted with it. An emptyDir volume is used to share data between containers in the same Pod, for example letting a Filebeat sidecar collect the logs produced by the application container.

cat nginx-empty.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        name: nginx1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /opt
          name: share-volume
      - image: nginx:1.15.2
        name: nginx2
        imagePullPolicy: IfNotPresent
        command:
        - sh
        - -c
        - sleep 3600
        volumeMounts:
        - mountPath: /mnt
          name: share-volume
      volumes:
      - name: share-volume
        emptyDir: {}

The spec.volumes field defines a volume named share-volume of type emptyDir. The Pod runs two containers, nginx1 and nginx2, which mount this volume at /opt and /mnt respectively, so whatever is written under /opt in one container is visible under /mnt in the other.
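A quick way to verify the sharing (a minimal sketch; it assumes the Deployment above has been applied and addresses it as deploy/nginx, which picks the first Pod):

# Write a file through nginx1's mount and read it back through nginx2's
kubectl apply -f nginx-empty.yaml
kubectl exec deploy/nginx -c nginx1 -- sh -c 'echo hello > /opt/test.txt'
kubectl exec deploy/nginx -c nginx2 -- cat /mnt/test.txt   # expected output: hello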
Example 2: Mounting host files with hostPath

A hostPath volume mounts a file or directory from the node's filesystem into the Pod, enabling data sharing between the Pod and the host. Common uses include mounting the host's time zone file into the Pod, or writing a Pod's log files out to the host.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        name: nginx1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /opt
          name: share-volume
        - mountPath: /etc/localtime
          name: timezone
      - image: nginx:1.15.2
        name: nginx2
        imagePullPolicy: IfNotPresent
        command:
        - sh
        - -c
        - sleep 3600
        volumeMounts:
        - mountPath: /opt
          name: share-volume
      volumes:
      - name: share-volume
        emptyDir: {}
      - name: timezone
        hostPath:
          path: /etc/localtime
          type: File
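A minimal check that the time zone mount works, assuming the manifest above is saved as nginx-hostpath.yaml (the file name is not given in the original):

kubectl apply -f nginx-hostpath.yaml
# The container should report the same local time and time zone as the node
kubectl exec deploy/nginx -c nginx1 -- date
date   # run on the node for comparison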
Example 3: Mounting NFS into containers

NFS is prone to single points of failure and has performance bottlenecks; use distributed storage in production. In public-cloud environments a NAS service is recommended for better performance and availability.
Every Kubernetes node must have nfs-utils installed before NFS volumes can be mounted.

# Server side
[root@k8s-master-node1 ~/volume]# yum install rpcbind -y
[root@k8s-master-node1 ~/volume]# yum install nfs-utils.x86_64 -y
[root@k8s-master-node1 ~/volume]# cat /etc/exports         
/mnt/nfs_share *(rw,sync,no_root_squash)
[root@k8s-master-node1 ~/volume]# systemctl start rpcbind.service    
[root@k8s-master-node1 ~/volume]# systemctl enable rpcbind.service  
[root@k8s-master-node1 ~/volume]# systemctl start nfs
[root@k8s-master-node1 ~/volume]# systemctl enable nfs

# Client side
[root@k8s-worker-node1 ~]# yum install nfs-utils.x86_64 -y    
[root@k8s-worker-node1 ~]# showmount -e 192.168.73.101
Export list for 192.168.73.101:
/mnt/nfs_share *
[root@k8s-worker-node1 ~]# mount -t nfs 192.168.73.101:/mnt/nfs_share /mnt/

[root@k8s-master-node1 ~/volume]# cat nginx-nfs.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        name: nginx1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /opt
          name: nfs-volume
      - image: nginx:1.15.2
        name: nginx2
        imagePullPolicy: IfNotPresent
        command:
        - sh
        - -c
        - sleep 3600
        volumeMounts:
        - mountPath: /opt
          name: nfs-volume
      volumes:
      - name: nfs-volume
        nfs:
          server: 192.168.73.101
          path: /mnt/nfs_share 
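To verify the NFS mount, write a file from inside the Pod and check that it appears in the export on the NFS server (a sketch; the Deployment is addressed as deploy/nginx):

kubectl apply -f nginx-nfs.yaml
kubectl exec deploy/nginx -c nginx1 -- sh -c 'echo from-pod > /opt/nfs-test.txt'
# On the NFS server:
cat /mnt/nfs_share/nfs-test.txt   # expected output: from-pod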

PersistentVolume

Reference: https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/

NFS-backed PV

A PV is a cluster-scoped resource: it has no namespace isolation and no namespace is specified when creating it, and it can be consumed by claims from any namespace.

cat pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/nfs_share
    server: 192.168.73.101
HostPath-backed PV
cat pv-hostpath.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Ceph RBD-backed PV

Ceph is a distributed storage system that provides file, block, and object storage, with high availability and efficient reads and writes.
A PV of type Ceph RBD is configured as follows:

cat pv-ceph.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
    - '192.168.73.101:6789'
    - '192.168.73.102:6789'
    pool: rbd
    image: ceph-rbd-pv-test 
    fsType: ext4
    readOnly: true
    user: admin
    secretRef:
      name: ceph-secret
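The secretRef above points to a Secret named ceph-secret that must contain the key of the Ceph admin user. A minimal sketch of creating it (the key value is a placeholder, and the default namespace is assumed):

# Read the admin key on the Ceph side, then store it in a Kubernetes Secret
ceph auth get-key client.admin
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key='<client.admin key>' -n default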

PersistentVolumeClaim

Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Creating a PVC

A PVC is a namespaced resource; if no namespace is specified, it is created in the default namespace.

# Create a HostPath-backed PV
cat pv-hostpath.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

# Create a PVC that binds to the PV; the storageClassName must match and the other parameters (access modes, capacity) must be compatible for binding to succeed
cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
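After both manifests are applied, the claim should bind to the volume. A quick check (output omitted here):

kubectl get pv task-pv-volume
kubectl get pvc task-pv-claim
# Both should report STATUS Bound once binding succeeds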

# Create a PVC targeting an NFS-backed PV (its storageClassName must match the PV's)
cat pvc-nfs.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs 
spec:
  storageClassName: nfs-slow 
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Using a PVC

To mount a PVC into a Pod, only the PVC name has to be filled in; the Pod does not need to care about the storage details. The claimName must match the PVC defined above, task-pv-claim.

cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-volume
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-volume
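A simple way to confirm the mount works is to write an index page through the mounted path and fetch it from the nginx container (a sketch using port-forward, since the Pod is not exposed by a Service):

kubectl exec task-pv-pod -- sh -c 'echo "hello pv" > /usr/share/nginx/html/index.html'
kubectl port-forward pod/task-pv-pod 8080:80 &
curl -s http://127.0.0.1:8080/   # expected output: hello pv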

Dynamic storage: StorageClass

Reference: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/
A StorageClass has no namespace isolation. It is used to manage the PV lifecycle automatically: creation, deletion, volume expansion, and so on.
Create a PVC that points to a StorageClass, and the StorageClass automatically provisions a PV for the Pod to use; a StatefulSet's volumeClaimTemplates can also request one PVC per Pod automatically (see the sketch below).
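A minimal sketch of the volumeClaimTemplates mechanism mentioned above; the names, image, replica count, and storageClassName are illustrative, and the headless Service referenced by serviceName is assumed to exist:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # headless Service assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
  # One PVC per Pod is created automatically: data-web-0, data-web-1, ...
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ceph-rbd
      resources:
        requests:
          storage: 1Gi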

Defining a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
Combining StorageClass with Ceph RBD

Used together they provide dynamic storage for the cluster.

# 1. Deploy the Ceph RBD provisioner
k create sa rbd-provisioner -n kube-system

cat provi-cephrbd.yaml 
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "registry.cn-beijing.aliyuncs.com/dotbalo/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner

# 2. Create a Ceph pool named rbdfork8s with pg_num 128
ceph osd pool create rbdfork8s 128
# Grant a dedicated user access to the new pool
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=rbdfork8s'
# Initialize the pool for RBD
ceph osd pool application enable rbdfork8s rbd
# Show the key of client.kube
ceph auth get-key client.kube
AQBleolk1HmcNBAA4ZxUiXH9zJTi+R2QRrJ9Tg==
# Create a Secret in Kubernetes holding this key, for use in the StorageClass configuration
k create secret generic ceph-k8s-secret --type="kubernetes.io/rbd" --from-literal=key='AQBleolk1HmcNBAA4ZxUiXH9zJTi+R2QRrJ9Tg==' -n kube-system
# Show the key of client.admin
ceph auth get-key client.admin
AQC/b4lkvY22NhAAfcgFtkEyWBoVSl8g1fKOpg==
# Store the key in a Kubernetes Secret
k create secret generic ceph-admin-secret --type="kubernetes.io/rbd" --from-literal=key='AQC/b4lkvY22NhAAfcgFtkEyWBoVSl8g1fKOpg==' -n kube-system

# 3. Create the StorageClass
# On the Ceph admin node, list the monitor information
ceph mon dump 
dumped monmap epoch 1
epoch 1
fsid f0ec8c36-e69b-4146-aa0d-0080ba4d2698
last_changed 2023-06-14 15:33:13.636124
created 2023-06-14 15:33:13.636124
0: 192.168.73.101:6789/0 mon.k8s-master-node1
Take the monitor address from the output (192.168.73.101) and put it into the StorageClass, normally as the IP address plus port 6789.

cat rbd-sc.yaml 
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.73.101:6789
  pool: rbdfork8s
  adminId: admin
  adminSecretNamespace: kube-system
  adminSecretName: ceph-admin-secret
  userId: kube
  userSecretNamespace: kube-system
  userSecretName: ceph-k8s-secret
  imageFormat: "2"
  imageFeatures: layering
# Check the result
k get sc
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
ceph-rbd        ceph.com/rbd                   Delete          Immediate              false                  26s

# 4. Create a PVC
cat pvc-ceph.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-test 
spec:
  storageClassName: ceph-rbd 
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
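If dynamic provisioning works, the PVC becomes Bound and a corresponding image appears in the Ceph pool. A quick check (the image name is generated by the provisioner):

kubectl get pvc rbd-pvc-test      # STATUS should be Bound
kubectl get pv                    # a dynamically provisioned PV appears
# On the Ceph side, list the images in the pool
rbd ls rbdfork8s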
CSI: the Container Storage Interface

Out-of-tree: storage integrations are implemented and maintained outside the Kubernetes core code (as CSI drivers), so storage vendors maintain their own drivers and the Kubernetes maintainers only need to look after the core code.
In-tree: the legacy built-in integrations (e.g. the Volume, PV, and StorageClass plugins shipped inside Kubernetes), where support for each storage backend is maintained inside the Kubernetes codebase.

1. Connecting to CephFS via CSI

# 1. Install the CephFS CSI driver
k create ns ceph-csi-cephfs
helm repo add ceph-csi https://ceph.github.io/csi-charts
git clone https://github.com/ceph/ceph-csi.git
helm install -n ceph-csi-cephfs "ceph-csi-cephfs" /root/ceph-csi/charts/ceph-csi-cephfs/
# Change the image registries (to mirrors reachable from the cluster)
grep -w 'image:' -A 3 /root/ceph-csi/charts/ceph-csi-cephfs/values.yaml  
    image:
      repository: registry.aliyuncs.com/google_containers/csi-node-driver-registrar
      #repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
      tag: v2.8.0
--
    image:
      repository: quay.io/cephcsi/cephcsi
      tag: canary
      pullPolicy: IfNotPresent
--
    image:
      repository: registry.aliyuncs.com/google_containers/csi-provisioner
      tag: v3.5.0
      pullPolicy: IfNotPresent
--
    image:
      repository: registry.aliyuncs.com/google_containers/csi-resizer
      tag: v1.8.0
      pullPolicy: IfNotPresent
--
    image:
      repository: registry.aliyuncs.com/google_containers/csi-snapshotter
      tag: v6.2.2
      pullPolicy: IfNotPresent

# Check the CSI driver Pods
k get pods -n ceph-csi-cephfs
NAME                                           READY   STATUS    RESTARTS   AGE
ceph-csi-cephfs-nodeplugin-tnw2l               3/3     Running   0          13s
ceph-csi-cephfs-nodeplugin-wb6v4               3/3     Running   0          13s
ceph-csi-cephfs-provisioner-556dfb6c7b-pcw4b   5/5     Running   0          13s
ceph-csi-cephfs-provisioner-556dfb6c7b-x54vr   5/5     Running   0          13s

# 2. Ceph configuration
# Create the data and metadata pools and a CephFS filesystem
ceph osd pool create sharefs-data0 128 128
ceph osd pool create sharefs-metadata 64 64
ceph fs new sharefs sharefs-metadata sharefs-data0

# Check the CephFS configuration
ceph fs ls
name: sharefs, metadata pool: sharefs-metadata, data pools: [sharefs-data0 ]

# When Kubernetes uses Ceph as backend storage it manipulates volumes in the Ceph cluster, so it needs permissions to create, delete, and modify them.
# Show the key of client.admin
ceph auth get-key client.admin
AQC/b4lkvY22NhAAfcgFtkEyWBoVSl8g1fKOpg==

# Create the Secret
k create secret generic csi-cephfs-secret --type="kubernetes.io/cephfs" --from-literal=adminKey="AQC/b4lkvY22NhAAfcgFtkEyWBoVSl8g1fKOpg==" --from-literal=adminID='admin' --namespace=ceph-csi-cephfs

# Look up and record the cluster fsid
ceph fsid 
f0ec8c36-e69b-4146-aa0d-0080ba4d2698

# List the Ceph monitors; the monitor IP plus port 6789 is the monitor address used later in the StorageClass
ceph mon dump 
dumped monmap epoch 1
epoch 1
fsid f0ec8c36-e69b-4146-aa0d-0080ba4d2698
last_changed 2023-06-14 15:33:13.636124
created 2023-06-14 15:33:13.636124
0: 192.168.73.101:6789/0 mon.k8s-master-node1

# 3. Create a file-sharing StorageClass
cat ceph-configmap.yaml 
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "f0ec8c36-e69b-4146-aa0d-0080ba4d2698",
        "monitors": [
          "192.168.73.101:6789"
        ],
        "cephFS": {
          "subvolumeGroup": "cephfs-k8s-csi"
        }
      }
    ]
metadata:
  name: ceph-csi-config
  namespace: ceph-csi-cephfs

# Create the StorageClass
cat cephfs-csi-sc.yaml 
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: f0ec8c36-e69b-4146-aa0d-0080ba4d2698 

  fsName: sharefs

  pool: sharefs-data0

  # The secrets have to contain user and/or Ceph admin credentials.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs

  # (optional) The driver can use either ceph-fuse (fuse) or
  # ceph kernelclient (kernel).
  # If omitted, default volume mounter will be used - this is
  # determined by probing for ceph-fuse and mount.ceph
  # mounter: kernel

  # (optional) Prefix to use for naming subvolumes.
  # If omitted, defaults to "csi-vol-".
  # volumeNamePrefix: "foo-bar-"

reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug

# 4. Verify Ceph CSI
# Create a PVC pointing at this StorageClass and check that a PV is provisioned
cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-test-csi 
spec:
  storageClassName: csi-cephfs-sc 
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
# A PV is provisioned automatically and the PVC status changes to Bound

# Create a Pod for a read/write test, mounting the PV at /mnt
cat test-pvc-dp.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-cephfs
  name: test-cephfs
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-cephfs
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-cephfs
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 36000
        image: registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools
        name: test-cephfs
        volumeMounts:
        - mountPath: /mnt
          name: cephfs-pvc-test
      volumes:
      - name: cephfs-pvc-test
        persistentVolumeClaim:
          claimName: cephfs-pvc-test-csi
# Exec into the Pod and create a file under /mnt to confirm writes succeed (see the sketch below).
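A minimal sketch of that write test, addressing the Deployment directly (the debug-tools image is assumed to provide a shell):

kubectl exec deploy/test-cephfs -- sh -c 'echo cephfs-ok > /mnt/test.txt'
kubectl exec deploy/test-cephfs -- cat /mnt/test.txt   # expected output: cephfs-ok
kubectl exec deploy/test-cephfs -- df -h /mnt          # should show a ceph mount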

2. Connecting to Ceph RBD via CSI
Helm chart reference: https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-rbd/README.md

# 1. Install the Ceph RBD CSI driver
k create namespace "ceph-csi-rbd"
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm install ceph-csi-rbd /root/ceph-csi/charts/ceph-csi-rbd/ -n ceph-csi-rbd
k get pods -n ceph-csi-rbd
NAME                                       READY   STATUS    RESTARTS   AGE
ceph-csi-rbd-nodeplugin-bhqhr              3/3     Running   0          5s
ceph-csi-rbd-nodeplugin-lcs2j              3/3     Running   0          5s
ceph-csi-rbd-provisioner-79cd69ffb-5w22z   7/7     Running   0          5s
ceph-csi-rbd-provisioner-79cd69ffb-vvpgr   7/7     Running   0          5s

# 2. Ceph configuration
# Create the Ceph pool
ceph osd pool create rbdfork8s 128
# Create a kube user with read access to the monitors and rwx access to the rbdfork8s pool
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=rbdfork8s'
# Initialize the pool
ceph osd pool application enable rbdfork8s rbd
rbd pool init rbdfork8s
# Show the key of client.kube
ceph auth get-key client.kube
AQBleolk1HmcNBAA4ZxUiXH9zJTi+R2QRrJ9Tg==
# Create the Secret
k create secret generic csi-rbd-secret --type="kubernetes.io/rbd" --from-literal=userKey='AQBleolk1HmcNBAA4ZxUiXH9zJTi+R2QRrJ9Tg==' --from-literal=userID='kube' --namespace=ceph-csi-rbd
# Look up and record the cluster fsid
ceph fsid 
f0ec8c36-e69b-4146-aa0d-0080ba4d2698
# List the Ceph monitors
ceph mon dump 
dumped monmap epoch 1
epoch 1
fsid f0ec8c36-e69b-4146-aa0d-0080ba4d2698
last_changed 2023-06-14 15:33:13.636124
created 2023-06-14 15:33:13.636124
0: 192.168.73.101:6789/0 mon.k8s-master-node1

# 3. Create the StorageClass
# Create the Ceph CSI ConfigMap used by the RBD driver
cat ceph-configmap.yaml 
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "f0ec8c36-e69b-4146-aa0d-0080ba4d2698",
        "monitors": [
          "192.168.73.101:6789"
        ],
        "cephFS": {
          "subvolumeGroup": "cephfs-k8s-csi"
        }
      }
    ]
metadata:
  name: ceph-csi-config
  namespace: ceph-csi-rbd

# Create the StorageClass
cat rbd-csi-sc.yaml 
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: f0ec8c36-e69b-4146-aa0d-0080ba4d2698 
   pool: rbdfork8s
   imageFeatures: layering

   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd 
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
   csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
   - discard

# 4. Verify Ceph RBD
# Create a PVC and check that a PV is provisioned automatically
cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-test-csi 
spec:
  storageClassName: csi-rbd-sc 
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi

# Create a Deployment that mounts the PVC
cat test-pvc-dp.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-rbd
  name: test-rbd
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-rbd
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-rbd
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 36000
        image: registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools
        name: test-rbd
        volumeMounts:
        - mountPath: /mnt
          name: rbd-pvc-test
      volumes:
      - name: rbd-pvc-test
        persistentVolumeClaim:
          claimName: rbd-pvc-test-csi
# Exec into the Pod and create a file under /mnt to confirm writes succeed (see the sketch below).
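A minimal sketch of that check, plus a look at the provisioned image on the Ceph side (the image name is generated by the provisioner):

kubectl exec deploy/test-rbd -- sh -c 'echo rbd-ok > /mnt/test.txt'
kubectl exec deploy/test-rbd -- cat /mnt/test.txt   # expected output: rbd-ok
# On the Ceph side:
rbd ls rbdfork8s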

Decoupling storage makes stateful applications stateless by pushing state to remote storage. For file data, object storage can hold the files directly on an object storage platform; for cached data, middleware such as Redis can handle reads and writes. The application then becomes stateless, and deployments, restarts, and migrations no longer risk data loss.

