Create the storage pools
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
A CephFS file system needs two pools: one to hold the actual file data, and one to hold the metadata (indexes and other information about the data).
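If you want to confirm both pools exist before moving on, you can list them (the output format varies slightly between Ceph releases):

# Both cephfs_data and cephfs_metadata should appear, along with their PG counts
ceph osd pool ls detail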
Create the file system
ceph fs new cephfs cephfs_metadata cephfs_data
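To confirm the file system was created and has an active MDS (this assumes an MDS daemon is already running in the cluster), you can check:

# Lists the file system with its metadata and data pools
ceph fs ls
# The MDS serving cephfs should eventually report up:active
ceph mds stat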
Get the admin key
ceph auth get-key client.admin | base64
The output here is already base64-encoded, so it can be used directly later. Make a note of this string.
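As an optional sanity check, you can decode the string and make sure it matches the raw key (the base64 value below is the example used later in this post; substitute your own):

# Raw key straight from the Ceph keyring
ceph auth get-key client.admin
# Decoding the base64 string should print exactly the same key
echo 'QVFDYStFcGRWT05OSkJBQWw5NTZwWHI5U3gwM0ZJQWdFR2hDTHc9PQ==' | base64 -d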
K8s operations
Install dependencies
This step must be performed on every K8s node, including the K8s control-plane node.
yum install ceph-common
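To verify the client packages were installed correctly on each node:

# Should print the installed Ceph client version
ceph --version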
Create the Ceph secret
apiVersion: v1
kind: Secret
metadata:
  name: storage-secret
  namespace: default
type: Opaque
data:
  key: QVFDYStFcGRWT05OSkJBQWw5NTZwWHI5U3gwM0ZJQWdFR2hDTHc9PQ==
For the key field, fill in the string output by the "Get the admin key" step above (quoting the value in YAML is optional). Note that this Secret's name and namespace must match what is referenced later: the StorageClass below uses adminSecretName / adminSecretNamespace, and the static PV references it via secretRef, so keep them consistent.
The snippets here are YAML: you can paste them straight into the Dashboard's Create dialog and run them, or save them as xxx.yaml and run kubectl create -f xxx.yaml on the control-plane node. The same applies to everything below.
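If you would rather not paste the base64 value by hand, an equivalent approach (a sketch, assuming the node where you run kubectl can also run the ceph CLI with the admin keyring) is to let kubectl do the encoding; --from-literal values are base64-encoded automatically, so pass the raw key, not the encoded string:

kubectl create secret generic storage-secret \
  --namespace default \
  --from-literal=key="$(ceph auth get-key client.admin)"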
Deploy cephfs-provisioner
Create the namespace
apiVersion: v1
kind: Namespace
metadata:
  name: cephfs
  labels:
    name: cephfs
Create the service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
Create the Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
Create the ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["policy"]
    resourceNames: ["cephfs-provisioner"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
Bind the Role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
Bind the ClusterRole
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
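With the Role and ClusterRole bound, you can spot-check the service account's permissions before deploying the provisioner; this is optional but catches RBAC typos early:

# Both commands should print "yes" if the bindings above were applied correctly
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:cephfs:cephfs-provisioner
kubectl auth can-i create secrets -n cephfs \
  --as=system:serviceaccount:cephfs:cephfs-provisioner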
Create the provisioner Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
            - "-disable-ceph-namespace-isolation=true"
      serviceAccountName: cephfs-provisioner
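After applying the Deployment, check that the provisioner pod comes up and look at its log for connection or authentication problems (the pod name in your cluster will differ):

kubectl -n cephfs get pods -l app=cephfs-provisioner
# Startup errors (monitor connectivity, bad admin key) show up here
kubectl -n cephfs logs deployment/cephfs-provisioner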
Create the StorageClass
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.10.101:6789,192.168.10.102:6789,192.168.10.103:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: "kube-system"
  claimRoot: /volumes/kubernetes
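Note that adminSecretName / adminSecretNamespace above point at a Secret named ceph-admin-secret in kube-system; either create such a Secret or change these two fields to match the Secret created earlier. With the StorageClass in place you can also provision volumes dynamically instead of hand-creating the PV shown in the next step; a minimal PVC sketch (the name cephfs-dynamic-pvc is just an example) looks like this:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-dynamic-pvc
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi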
Create the PV
cat cephfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
  labels:
    pv: cephfs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 10.121.116.20:6789
      - 10.121.116.21:6789
      - 10.121.116.22:6789
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
  persistentVolumeReclaimPolicy: Delete
Create the PVC
cat cephfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: cephfs-pv
Check the status
[root@controller ceph]# kubectl get pv,pvc
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/cephfs-pv   20Gi       RWX            Delete           Bound    default/cephfs-pvc                           56m
NAME                               STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/cephfs-pvc   Bound    cephfs-pv   20Gi       RWX                           56m
Test: deploy MySQL with CephFS mounted
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: mysql
      protocol: TCP
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: cephfs-testpod1
              mountPath: /var/lib/mysql/
      volumes:
        - name: cephfs-testpod1
          persistentVolumeClaim:
            claimName: cephfs-pvc
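Once the pod is running, you can confirm that /var/lib/mysql really sits on CephFS (replace <mysql-pod-name> with the actual pod name from the first command):

kubectl get pods -l app=mysql
# The filesystem type for /var/lib/mysql should show up as ceph (or fuse.ceph-fuse)
kubectl exec -it <mysql-pod-name> -- df -hT /var/lib/mysql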
To mount CephFS directly on a Linux host, install ceph-fuse first, then mount:
ceph-fuse /test
The trailing /test is the local directory the file system is mounted onto.
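A more complete invocation (a sketch; adjust the monitor address, keyring path, and mount point to your environment) points ceph-fuse at a monitor and a keyring explicitly:

mkdir -p /test
# -m: monitor address, -k: keyring with the client key, --id: cephx user (admin by default)
ceph-fuse -m 192.168.10.101:6789 -k /etc/ceph/ceph.client.admin.keyring --id admin /test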
Force-delete a stuck PVC
kubectl patch pvc/mosquitto-config2 -p '{"metadata":{"finalizers":null}}'
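Before clearing the finalizers it helps to see which one is blocking deletion (mosquitto-config2 is just the example name used above; substitute your own PVC):

# Usually prints ["kubernetes.io/pvc-protection"] for a PVC stuck in Terminating
kubectl get pvc mosquitto-config2 -o jsonpath='{.metadata.finalizers}'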
From: https://www.cnblogs.com/zhanghn8/p/17952875