Preface
NFS Subdir External Provisioner can use an existing NFS server to dynamically provision PVs for PVCs.
Deploy the NFS server
Prepare the image
See my earlier blog post for this part; I won't repeat it here.
Label the node
The NFS shared directory is persisted with a hostPath volume, so the NFS server pod must be pinned to one node to keep it from drifting.
kubectl label node 192.168.22.124 nfs-server=true
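To double-check that the label is in place before moving on:
kubectl get nodes -l nfs-server=true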
Start the NFS server
- Two gotchas to record here:
  - In the NFS exports config, the root (i.e. the first) shared directory needs the fsid=0 option, and clients then mount / directly. Without fsid=0, the mount fails with "No such file or directory". For the details, see the official manual: exports
  - Because the host itself has to mount the NFS share under the kubelet directory, it cannot mount via the Service name unless the local DNS server also resolves the cluster's internal DNS. For now I create the Service with a fixed clusterIP, and inside the cluster NFS is mounted through that Service IP (a mount sketch follows this list).
    - The clusterIP range is set by the apiserver's --service-cluster-ip-range flag, usually 10.96.0.0/12, i.e. 10.96.0.0 through 10.111.255.255; pick an IP in that range that the cluster is not already using. Reference: Service ClusterIP allocation
    - exports must cover the node subnet, the Service subnet, and the Pod subnet. If that feels tedious, just write *; as long as the share is not exposed externally, it is not a big deal.
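As referenced in the list above, a minimal mount sketch from a machine in the node subnet, assuming the fixed Service IP 10.111.111.111 defined below:
mkdir -p /mnt/nfs-share-data
# With fsid=0 on /nfs-share-data, that export becomes the NFSv4 pseudo-root,
# so the client mounts "/" instead of the full export path:
mount -t nfs 10.111.111.111:/ /mnt/nfs-share-data
# Without fsid=0, the same mount fails with an error along the lines of
# "No such file or directory".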
---
apiVersion: v1
data:
  exports: |
    /nfs-share-data 192.168.22.0/24(rw,fsid=0,sync,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
    /nfs-share-data 10.96.0.0/12(rw,fsid=0,sync,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
    /nfs-share-data 172.22.0.0/16(rw,fsid=0,sync,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
kind: ConfigMap
metadata:
  name: nfs-server-cm
  namespace: storage
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: nfs-server
  name: nfs-server-svc
  namespace: storage
spec:
  clusterIP: 10.111.111.111
  ports:
    - name: tcp
      port: 2049
      targetPort: tcp
  selector:
    app.kubernetes.io/name: nfs-server
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: nfs-server
  name: nfs-server
  namespace: storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nfs-server
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nfs-server
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: nfs-server
                    operator: In
                    values:
                      - "true"
      containers:
        - env:
            - name: SHARED_DIRECTORY
              value: /nfs-share-data
          image: nfs-server-2.6.4:alpine-3.20
          imagePullPolicy: IfNotPresent
          name: nfs-server
          ports:
            - containerPort: 2049
              name: tcp
              protocol: TCP
          resources:
            limits:
              cpu: 1000m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 100Mi
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
          volumeMounts:
            - mountPath: /nfs-share-data
              name: nfs-share-data
            - mountPath: /etc/exports
              name: nfs-config
              subPath: exports
      volumes:
        - hostPath:
            path: /approot/k8s_data/nfs-share-data
            type: DirectoryOrCreate
          name: nfs-share-data
        - configMap:
            name: nfs-server-cm
          name: nfs-config
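Assuming the ConfigMap, Service, and Deployment above are saved together as nfs-server.yaml (the file name is my own choice), apply them and confirm the pod lands on the labeled node:
kubectl create namespace storage   # if it does not exist yet
kubectl apply -f nfs-server.yaml
kubectl -n storage get pod -o wide
kubectl -n storage get svc nfs-server-svc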
Create a PV to verify
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.111.111.111
    path: "/"
Create a PVC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Create a pod to verify the mount
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
    - name: app
      image: m.daocloud.io/busybox:1.37
      command: ["sh", "-c", "while true; do sleep 3600; done"]
      volumeMounts:
        - name: nfs-storage
          mountPath: /mnt/nfs
  volumes:
    - name: nfs-storage
      persistentVolumeClaim:
        claimName: nfs-pvc
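The PV, PVC, and pod above can be applied and checked roughly like this (file names are my own; the PV should show Bound once the PVC matches it):
kubectl apply -f nfs-pv.yaml -f nfs-pvc.yaml -f nfs-client.yaml
kubectl get pv,pvc
kubectl exec -it nfs-client -- sh -c 'echo ok > /mnt/nfs/test && cat /mnt/nfs/test'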
If the pod fails to start with an error like the one below, install nfs-utils on the k8s nodes:
Warning FailedMount 1s (x7 over 33s) kubelet MountVolume.SetUp failed for volume "nfs-pv" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs nfs-server-svc.storage.svc.cluster.local:/nfs-share-data /var/lib/kubelet/pods/9e7abc6f-573c-4c3f-b023-cdceee95722a/volumes/kubernetes.io~nfs/nfs-pv
Output: mount: /var/lib/kubelet/pods/9e7abc6f-573c-4c3f-b023-cdceee95722a/volumes/kubernetes.io~nfs/nfs-pv: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
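The package name depends on the distribution, for example:
# RHEL / CentOS
yum install -y nfs-utils
# Debian / Ubuntu (the equivalent package is nfs-common)
apt-get install -y nfs-common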
Deploy the NFS Subdir External Provisioner
The project also has official Helm docs; if you want to use Helm, go straight to: NFS Subdirectory External Provisioner Helm Chart
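For reference, the Helm route documented there looks roughly like this, pointing the provisioner at the Service IP from above:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace storage \
  --set nfs.server=10.111.111.111 \
  --set nfs.path=/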
I deploy it with plain YAML manifests here.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: nfs-subdir-external-provisioner
  name: nfs-subdir-external-provisioner-sa
  namespace: storage
---
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: nfs-subdir-external-provisioner
  name: nfs-client
parameters:
  archiveOnDelete: "true"
  pathPattern: /
provisioner: cluster.local/nfs-subdir-external-provisioner
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: nfs-subdir-external-provisioner
  name: nfs-subdir-external-provisioner-runner
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
    verbs:
      - get
      - list
      - watch
      - create
      - delete
  - apiGroups:
      - ""
    resources:
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
      - update
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - update
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: nfs-subdir-external-provisioner
  name: run-nfs-subdir-external-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfs-subdir-external-provisioner-runner
subjects:
  - kind: ServiceAccount
    name: nfs-subdir-external-provisioner-sa
    namespace: storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: nfs-subdir-external-provisioner
  name: leader-locking-nfs-subdir-external-provisioner
  namespace: storage
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: nfs-subdir-external-provisioner
  name: leader-locking-nfs-subdir-external-provisioner
  namespace: storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-locking-nfs-subdir-external-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-subdir-external-provisioner-sa
    namespace: storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nfs-subdir-external-provisioner
  name: nfs-subdir-external-provisioner
  namespace: storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-subdir-external-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-subdir-external-provisioner
    spec:
      containers:
        - env:
            - name: PROVISIONER_NAME
              value: cluster.local/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.111.111.111
            - name: NFS_PATH
              value: /
          image: docker.m.daocloud.io/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          imagePullPolicy: IfNotPresent
          name: nfs-subdir-external-provisioner
          volumeMounts:
            - mountPath: /persistentvolumes
              name: nfs-subdir-external-provisioner-root
      serviceAccountName: nfs-subdir-external-provisioner-sa
      volumes:
        - name: nfs-subdir-external-provisioner-root
          nfs:
            path: /
            server: 10.111.111.111
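Apply the provisioner manifests (again, the file name is my own) and make sure the pod is running and the StorageClass is registered:
kubectl apply -f nfs-provisioner.yaml
kubectl -n storage get pod -l app=nfs-subdir-external-provisioner
kubectl get storageclass nfs-client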
Create pods to verify
Option 1: pre-create the PVC
Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
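Since the StorageClass uses volumeBindingMode: Immediate, the claim should be provisioned and go Bound right after it is applied:
kubectl apply -f test-claim.yaml
kubectl get pvc test-claim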
Create the pod
Because the StorageClass uses pathPattern: /, everything lands directly in the root of the shared directory, which gets messy. The pod therefore defines a POD_NAME variable and uses volumeMounts.subPathExpr to keep its data in a directory named after the pod.
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: m.daocloud.io/busybox:1.37
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/hello && exit 0 || exit 1"
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
          subPathExpr: $(POD_NAME)
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
Use the nfs-client pod created earlier to check whether the hello file was created:
kubectl exec -it nfs-client -- ls /mnt/nfs/test-pod/
Option 2: use volumeClaimTemplates
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-sts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-sts
  template:
    metadata:
      labels:
        app: test-sts
    spec:
      containers:
        - name: test-sts
          image: m.daocloud.io/busybox:1.37
          command:
            - "/bin/sh"
          args:
            - "-c"
            # the container exits as soon as the file is touched, so with
            # restartPolicy Always the pod restarts repeatedly; it exists
            # only to verify that the volume is provisioned and writable
            - "touch /mnt/SUCCESS && exit 0 || exit 1"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          volumeMounts:
            - name: nfs-sts-pvc
              mountPath: "/mnt"
              subPathExpr: $(POD_NAME)
      restartPolicy: "Always"
  volumeClaimTemplates:
    - metadata:
        name: nfs-sts-pvc
      spec:
        storageClassName: nfs-client
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
Likewise, use the nfs-client pod created earlier to check whether the SUCCESS file was created:
kubectl exec -it nfs-client -- ls /mnt/nfs/test-sts-0
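Because StatefulSet claims are named <template>-<pod>, the PVC behind test-sts-0 can also be inspected directly:
kubectl get pvc nfs-sts-pvc-test-sts-0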
From: https://www.cnblogs.com/chen2ha/p/18503423