Volume Types
Reference: https://kubernetes.io/zh-cn/docs/concepts/storage/volumes/
NFS is strongly discouraged in production because it is prone to single points of failure. Prefer distributed storage or a public-cloud NAS service, Ceph, GlusterFS (possibly no longer maintained), MinIO, and so on.
Example 1: Sharing data with emptyDir
When a Pod is deleted, the data in its emptyDir volume is deleted with it. emptyDir is meant for sharing data between containers of the same Pod, for example letting Filebeat collect the logs produced by an application container.
cat nginx-empty.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        name: nginx1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /opt
          name: share-volume
      - image: nginx:1.15.2
        name: nginx2
        imagePullPolicy: IfNotPresent
        command:
        - sh
        - -c
        - sleep 3600
        volumeMounts:
        - mountPath: /mnt
          name: share-volume
      volumes:
      - name: share-volume
        emptyDir: {}
The spec.volumes field defines a volume named share-volume of type emptyDir. The Pod contains two containers, nginx1 and nginx2, which mount that volume at /opt and /mnt respectively, so the data under /opt and /mnt is shared between them.
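A quick way to confirm the sharing (assuming the Deployment above has been applied and its Pod is Running; test.txt is just an example file name):
# Write through nginx1's mount point, then read through nginx2's
kubectl exec deploy/nginx -c nginx1 -- sh -c 'echo hello > /opt/test.txt'
kubectl exec deploy/nginx -c nginx2 -- cat /mnt/test.txt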
Example 2: Mounting host files with hostPath
A hostPath volume mounts a file or directory from the node into the Pod, allowing data to be shared between the Pod and the host. Common uses include mounting the host's timezone file into the Pod, or writing the Pod's log files to the host.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        name: nginx1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /opt
          name: share-volume
        - mountPath: /etc/localtime
          name: timezone
      - image: nginx:1.15.2
        name: nginx2
        imagePullPolicy: IfNotPresent
        command:
        - sh
        - -c
        - sleep 3600
        volumeMounts:
        - mountPath: /opt
          name: share-volume
      volumes:
      - name: share-volume
        emptyDir: {}
      - name: timezone
        hostPath:
          path: /etc/localtime
          type: File
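A quick check of the timezone mount (assuming the Deployment above is applied; the output should match the node's local time zone):
kubectl exec deploy/nginx -c nginx1 -- date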
Example 3: Mounting NFS into a container
NFS is prone to single points of failure and has performance bottlenecks; distributed storage is recommended in production, and in public clouds a NAS service offers better performance and availability.
nfs-utils must be installed on every Kubernetes node, otherwise NFS volumes cannot be mounted.
# Server side
[root@k8s-master-node1 ~/volume]# yum install rpcbind -y
[root@k8s-master-node1 ~/volume]# yum install nfs-utils.x86_64 -y
[root@k8s-master-node1 ~/volume]# cat /etc/exports
/mnt/nfs_share *(rw,sync,no_root_squash)
[root@k8s-master-node1 ~/volume]# systemctl start rpcbind.service
[root@k8s-master-node1 ~/volume]# systemctl enable rpcbind.service
[root@k8s-master-node1 ~/volume]# systemctl start nfs
[root@k8s-master-node1 ~/volume]# systemctl enable nfs
# Client side
[root@k8s-worker-node1 ~]# yum install nfs-utils.x86_64 -y
[root@k8s-worker-node1 ~]# showmount -e 192.168.73.101
Export list for 192.168.73.101:
/mnt/nfs_share *
[root@k8s-worker-node1 ~]# mount -t nfs 192.168.73.101:/mnt/nfs_share /mnt/
[root@k8s-master-node1 ~/volume]# cat nginx-nfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        name: nginx1
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /opt
          name: share-volume
      - image: nginx:1.15.2
        name: nginx2
        imagePullPolicy: IfNotPresent
        command:
        - sh
        - -c
        - sleep 3600
        volumeMounts:
        - mountPath: /opt
          name: nfs-volume      # mount the NFS volume so it is actually used by a container
      volumes:
      - name: share-volume
        emptyDir: {}
      - name: nfs-volume
        nfs:
          server: 192.168.73.101   # the NFS server configured above
          path: /mnt/nfs_share
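To verify the NFS mount (a sketch, assuming the Deployment above is applied and the NFS server configured earlier is reachable; nfs-test.txt is an example name):
# Write through the Pod's NFS-backed mount
kubectl exec deploy/nginx -c nginx2 -- sh -c 'echo from-pod > /opt/nfs-test.txt'
# On the NFS server, the file should appear in the export
ls /mnt/nfs_share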
PersistentVolume
Reference: https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/
NFS-backed PV
PVs are cluster-scoped (no namespace isolation); you do not specify a namespace when creating them, and a PV can be consumed from any namespace.
cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /mnt/nfs_share
    server: 192.168.73.101
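After creating the PV it can be inspected directly; until a claim binds it, the status stays Available:
kubectl create -f pv.yaml
kubectl get pv pv0003
# STATUS should be Available until a PVC binds the volume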
HostPath-backed PV
cat pv-hostpath.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Ceph RBD-backed PV
Ceph is a distributed storage system that supports file, block, and object storage, with high availability and efficient reads and writes.
A Ceph RBD-backed PV looks like this:
cat pv-ceph.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  rbd:
    monitors:
    - '192.168.73.101:6789'
    - '192.168.73.102:6789'
    pool: rbd
    image: ceph-rbd-pv-test
    fsType: ext4
    readOnly: true
    user: admin
    secretRef:
      name: ceph-secret
PersistentVolumeClaim
Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
Creating a PVC
PVCs are namespace-scoped; if no namespace is specified, the PVC is created in the current context's default namespace.
# Create a HostPath-backed PV
cat pv-hostpath.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
# Create a PVC to bind to the PV; binding succeeds only if the storageClassName matches and the other parameters (access modes, capacity) are compatible
cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
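A quick way to confirm the binding (assuming both manifests above are applied):
kubectl create -f pv-hostpath.yaml -f pvc.yaml
kubectl get pvc task-pv-claim
# STATUS should be Bound, with VOLUME showing task-pv-volume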
# Create a PVC for an NFS-backed PV
cat pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  storageClassName: nfs-slow
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Using a PVC
To mount a PVC into a Pod, you only need its name; the storage details stay hidden. claimName must match the PVC defined above, task-pv-claim.
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-volume
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-volume
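To see the claim in use end to end, a sketch (it assumes the Pod is scheduled onto the node that holds /mnt/data; index.html is only an example file):
# On the node backing the PV, create a page in the hostPath directory
echo 'Hello from hostPath' > /mnt/data/index.html
# Read it back through the Pod's mount
kubectl exec task-pv-pod -- cat /usr/share/nginx/html/index.html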
Dynamic Storage: StorageClass
Reference: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/
StorageClasses are not namespaced. They automate the lifecycle of PVs: creation, deletion, automatic expansion, and so on.
Create a PVC that references a StorageClass and a PV is provisioned automatically for the Pod to use; a StatefulSet's volumeClaimTemplates can also request one PVC per Pod automatically, as sketched below.
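A minimal volumeClaimTemplates sketch (the StatefulSet name web, the nginx image, and the StorageClass name csi-rbd-sc are illustrative assumptions; any existing StorageClass works):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15.2
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
  volumeClaimTemplates:      # one PVC named data-web-<ordinal> is created per Pod
  - metadata:
      name: data
    spec:
      storageClassName: csi-rbd-sc
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi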
Defining a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
Integrating StorageClass with Ceph RBD
Used together, they provide dynamic storage for the cluster.
# 1. Deploy the Ceph RBD provisioner
k create sa rbd-provisioner -n kube-system
cat provi-cephrbd.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["kube-dns","coredns"]
  verbs: ["list", "get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "registry.cn-beijing.aliyuncs.com/dotbalo/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
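Applying the manifest and checking that the provisioner comes up (the generated Pod name suffix will differ):
kubectl apply -f provi-cephrbd.yaml
kubectl get pods -n kube-system -l app=rbd-provisioner
# Expect one rbd-provisioner Pod in Running state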
# 2. Create a Ceph pool named rbdfork8s with pg_num 128
ceph osd pool create rbdfork8s 128
# Grant a dedicated user access to the new pool
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=rbdfork8s'
# Initialize the pool for RBD
ceph osd pool application enable rbdfork8s rbd
# Get the key of client.kube
ceph auth get-key client.kube
AQBleolk1HmcNBAA4ZxUiXH9zJTi+R2QRrJ9Tg==
# Create a Secret holding this key in Kubernetes, for use in the StorageClass configuration
k create secret generic ceph-k8s-secret --type="kubernetes.io/rbd" --from-literal=key='AQBleolk1HmcNBAA4ZxUiXH9zJTi+R2QRrJ9Tg==' -n kube-system
# Get the key of client.admin
ceph auth get-key client.admin
AQC/b4lkvY22NhAAfcgFtkEyWBoVSl8g1fKOpg==
# Store the key in a Kubernetes Secret
k create secret generic ceph-admin-secret --type="kubernetes.io/rbd" --from-literal=key='AQC/b4lkvY22NhAAfcgFtkEyWBoVSl8g1fKOpg==' -n kube-system
# 3. Create the StorageClass
# On the Ceph admin node, list the monitor information
ceph mon dump
dumped monmap epoch 1
epoch 1
fsid f0ec8c36-e69b-4146-aa0d-0080ba4d2698
last_changed 2023-06-14 15:33:13.636124
created 2023-06-14 15:33:13.636124
0: 192.168.73.101:6789/0 mon.k8s-master-node1
Put the monitor address from the output above (192.168.73.101, typically the IP address plus port 6789) into the StorageClass.
cat rbd-sc.yaml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.73.101:6789
  pool: rbdfork8s
  adminId: admin
  adminSecretNamespace: kube-system
  adminSecretName: ceph-admin-secret
  userId: kube
  userSecretNamespace: kube-system
  userSecretName: ceph-k8s-secret
  imageFormat: "2"
  imageFeatures: layering
# Check the result
k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ceph-rbd ceph.com/rbd Delete Immediate false 26s
# 4. Create a PVC
cat pvc-ceph.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-test
spec:
  storageClassName: ceph-rbd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
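Creating the claim and checking that dynamic provisioning works (a sketch; the provisioner generates the RBD image name, so it differs per cluster):
kubectl create -f pvc-ceph.yaml
kubectl get pvc rbd-pvc-test
# STATUS should change to Bound once the PV is provisioned
# On the Ceph side, a new image should appear in the pool:
rbd ls -p rbdfork8s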
CSI: the Container Storage Interface
Out-of-tree: the integrations for the various storage backends are implemented and maintained outside the Kubernetes core code, so the Kubernetes maintainers no longer need to care about storage details and can focus on the core code.
In-tree: storage integrations maintained inside Kubernetes itself, such as the built-in Volume, PV, and StorageClass plugins.
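Installed CSI drivers register themselves as CSIDriver objects, so you can list what the cluster currently supports (after the ceph-csi charts installed below are deployed, cephfs.csi.ceph.com and rbd.csi.ceph.com should appear):
kubectl get csidrivers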
1. Connecting CephFS via CSI
# 1. Install the CephFS CSI driver
k create ns ceph-csi-cephfs
helm repo add ceph-csi https://ceph.github.io/csi-charts
git clone https://github.com/ceph/ceph-csi.git
helm install -n ceph-csi-cephfs "ceph-csi-cephfs" /root/ceph-csi/charts/ceph-csi-cephfs/
# Image addresses in values.yaml have been replaced with reachable mirror registries (adjust them before running helm install)
grep -w 'image:' -A 3 /root/ceph-csi/charts/ceph-csi-cephfs/values.yaml
image:
repository: registry.aliyuncs.com/google_containers/csi-node-driver-registrar
#repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
tag: v2.8.0
--
image:
repository: quay.io/cephcsi/cephcsi
tag: canary
pullPolicy: IfNotPresent
--
image:
repository: registry.aliyuncs.com/google_containers/csi-provisioner
tag: v3.5.0
pullPolicy: IfNotPresent
--
image:
repository: registry.aliyuncs.com/google_containers/csi-resizer
tag: v1.8.0
pullPolicy: IfNotPresent
--
image:
repository: registry.aliyuncs.com/google_containers/csi-snapshotter
tag: v6.2.2
pullPolicy: IfNotPresent
# Check the CSI driver Pods
k get pods -n ceph-csi-cephfs
NAME READY STATUS RESTARTS AGE
ceph-csi-cephfs-nodeplugin-tnw2l 3/3 Running 0 13s
ceph-csi-cephfs-nodeplugin-wb6v4 3/3 Running 0 13s
ceph-csi-cephfs-provisioner-556dfb6c7b-pcw4b 5/5 Running 0 13s
ceph-csi-cephfs-provisioner-556dfb6c7b-x54vr 5/5 Running 0 13s
# 2. Ceph configuration
# Create data and metadata pools and a CephFS filesystem
ceph osd pool create sharefs-data0 128 128
ceph osd pool create sharefs-metadata 64 64
ceph fs new sharefs sharefs-metadata sharefs-data0
# Verify the filesystem
ceph fs ls
name: sharefs, metadata pool: sharefs-metadata, data pools: [sharefs-data0 ]
# Kubernetes uses Ceph as backend storage and operates on volumes in the Ceph cluster, so it needs permissions to create, delete, and modify them.
# Get the key of client.admin
ceph auth get-key client.admin
AQC/b4lkvY22NhAAfcgFtkEyWBoVSl8g1fKOpg==
# Create the Secret
k create secret generic csi-cephfs-secret --type="kubernetes.io/cephfs" --from-literal=adminKey="AQC/b4lkvY22NhAAfcgFtkEyWBoVSl8g1fKOpg==" --from-literal=adminID='admin' --namespace=ceph-csi-cephfs
# Record the cluster fsid
ceph fsid
f0ec8c36-e69b-4146-aa0d-0080ba4d2698
# List the Ceph monitors; monitor IP plus port 6789 is the monitor address used later in the StorageClass
ceph mon dump
dumped monmap epoch 1
epoch 1
fsid f0ec8c36-e69b-4146-aa0d-0080ba4d2698
last_changed 2023-06-14 15:33:13.636124
created 2023-06-14 15:33:13.636124
0: 192.168.73.101:6789/0 mon.k8s-master-node1
# 3. Create a shared-filesystem StorageClass
cat ceph-configmap.yaml
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "f0ec8c36-e69b-4146-aa0d-0080ba4d2698",
        "monitors": [
          "192.168.73.101:6789"
        ],
        "cephFS": {
          "subvolumeGroup": "cephfs-k8s-csi"
        }
      }
    ]
metadata:
  name: ceph-csi-config
  namespace: ceph-csi-cephfs
# Create the StorageClass
cat cephfs-csi-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: f0ec8c36-e69b-4146-aa0d-0080ba4d2698
  fsName: sharefs
  pool: sharefs-data0
  # The secrets have to contain user and/or Ceph admin credentials.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
  # (optional) The driver can use either ceph-fuse (fuse) or
  # ceph kernelclient (kernel).
  # If omitted, default volume mounter will be used - this is
  # determined by probing for ceph-fuse and mount.ceph
  # mounter: kernel
  # (optional) Prefix to use for naming subvolumes.
  # If omitted, defaults to "csi-vol-".
  # volumeNamePrefix: "foo-bar-"
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- debug
# 4. Verify the Ceph CSI driver
# Create a PVC that references the StorageClass and check that a PV is provisioned automatically
cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-test-csi
spec:
  storageClassName: csi-cephfs-sc
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
# A PV is created automatically and the PVC status becomes Bound
# Create a Pod for a read/write test, mounting the PV at /mnt
cat test-pvc-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-cephfs
  name: test-cephfs
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-cephfs
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-cephfs
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 36000
        image: registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools
        name: test-cephfs
        volumeMounts:
        - mountPath: /mnt
          name: cephfs-pvc-test
      volumes:
      - name: cephfs-pvc-test
        persistentVolumeClaim:
          claimName: cephfs-pvc-test-csi
# Exec into the Pod and create a file under /mnt to confirm writes succeed.
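For example (assuming the test-cephfs Deployment above is running; test.txt is an arbitrary file name):
kubectl exec -it deploy/test-cephfs -- sh -c 'echo cephfs-ok > /mnt/test.txt && cat /mnt/test.txt'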
2. Connecting Ceph RBD via CSI
Helm reference: https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-rbd/README.md
# 1. Install the Ceph RBD CSI driver
k create namespace "ceph-csi-rbd"
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm install ceph-csi-rbd /root/ceph-csi/charts/ceph-csi-rbd/ -n ceph-csi-rbd
k get pods -n ceph-csi-rbd
NAME READY STATUS RESTARTS AGE
ceph-csi-rbd-nodeplugin-bhqhr 3/3 Running 0 5s
ceph-csi-rbd-nodeplugin-lcs2j 3/3 Running 0 5s
ceph-csi-rbd-provisioner-79cd69ffb-5w22z 7/7 Running 0 5s
ceph-csi-rbd-provisioner-79cd69ffb-vvpgr 7/7 Running 0 5s
# 2. Ceph configuration
# Create a Ceph pool
ceph osd pool create rbdfork8s 128
# Create a kube user with read access to the monitors and read/write access to the rbdfork8s pool
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=rbdfork8s'
# Initialize the pool
ceph osd pool application enable rbdfork8s rbd
rbd pool init rbdfork8s
# Get the key of client.kube
ceph auth get-key client.kube
AQBleolk1HmcNBAA4ZxUiXH9zJTi+R2QRrJ9Tg==
# Create the Secret
k create secret generic csi-rbd-secret --type="kubernetes.io/rbd" --from-literal=userKey='AQBleolk1HmcNBAA4ZxUiXH9zJTi+R2QRrJ9Tg==' --from-literal=userID='kube' --namespace=ceph-csi-rbd
# Record the cluster fsid
ceph fsid
f0ec8c36-e69b-4146-aa0d-0080ba4d2698
# List the Ceph monitors
ceph mon dump
dumped monmap epoch 1
epoch 1
fsid f0ec8c36-e69b-4146-aa0d-0080ba4d2698
last_changed 2023-06-14 15:33:13.636124
created 2023-06-14 15:33:13.636124
0: 192.168.73.101:6789/0 mon.k8s-master-node1
# 3. Create the StorageClass
# Create the Ceph CSI ConfigMap used by the RBD driver
cat ceph-configmap.yaml
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "f0ec8c36-e69b-4146-aa0d-0080ba4d2698",
        "monitors": [
          "192.168.73.101:6789"
        ],
        "cephFS": {
          "subvolumeGroup": "cephfs-k8s-csi"
        }
      }
    ]
metadata:
  name: ceph-csi-config
  namespace: ceph-csi-rbd
# Create the StorageClass
cat rbd-csi-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: f0ec8c36-e69b-4146-aa0d-0080ba4d2698
  pool: rbdfork8s
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- discard
# 4. Verify Ceph RBD
# Create a PVC and check whether a PV is provisioned automatically
cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-test-csi
spec:
  storageClassName: csi-rbd-sc
  accessModes:
  - ReadWriteOnce    # filesystem-mode RBD volumes only support ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
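The same kind of check applies here (a sketch; the csi-vol-* image name is generated by the driver and will differ per cluster):
kubectl create -f pvc.yaml
kubectl get pvc rbd-pvc-test-csi
# STATUS should become Bound and a matching PV should appear in kubectl get pv
# The provisioned image is visible on the Ceph side:
rbd ls -p rbdfork8s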
# Create a Deployment that mounts the PVC
cat test-pvc-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-rbd
  name: test-rbd
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-rbd
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-rbd
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 36000
        image: registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools
        name: test-rbd
        volumeMounts:
        - mountPath: /mnt
          name: rbd-pvc-test
      volumes:
      - name: rbd-pvc-test
        persistentVolumeClaim:
          claimName: rbd-pvc-test-csi
# Exec into the Pod and create a file under /mnt to confirm writes succeed.
Separating storage means making stateful applications stateless by pushing state out of the Pod: files can be stored directly on an object storage platform, and cached data can be kept in middleware such as Redis. The application itself then becomes stateless, so deployments, restarts, and migrations do not lose data.
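As a small illustration of pushing file state to object storage, a MinIO-client sketch (the alias myminio, the endpoint, the credentials, and the reports bucket are all placeholder assumptions):
# Register the object storage endpoint, then copy an application-generated file into a bucket
mc alias set myminio http://minio.example.com:9000 ACCESS_KEY SECRET_KEY
mc cp ./app-report.pdf myminio/reports/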