Environment
3 CentOS 8 machines, each with 3 disks
Hostnames: ceph1, ceph2, ceph3
Deploying the Ceph cluster with ceph-ansible
Prepare ceph-ansible on ceph1
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
git checkout stable-5.0 # use stable-4.0 for CentOS 7
pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
echo "PATH=\$PATH:/usr/local/bin" >>~/.bashrc
source ~/.bashrc
ansible --version # if the version number prints, the setup succeeded
## Notes
## 1. Set up passwordless SSH from ceph1 to ceph2 and ceph3 (ceph1 also needs passwordless SSH to itself)
## 2. The firewall must be disabled, and check that the clocks are in sync (see the sketch below)
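A minimal sketch of these prerequisites, assuming root SSH and that the three hostnames resolve on every node:
# on ceph1: generate a key and push it to all three nodes (including ceph1 itself)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in ceph1 ceph2 ceph3; do ssh-copy-id root@$h; done
# on every node: stop the firewall and confirm chrony reports a synced source
systemctl disable --now firewalld
chronyc sources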
Edit ceph-ansible's group variables and the hosts inventory
cp group_vars/all.yml.sample group_vars/all.yml
cat group_vars/all.yml | grep -v ^# | grep -v ^$
---
dummy:
ceph_release_num: 15
cluster: ceph
mon_group_name: mons
osd_group_name: osds
rgw_group_name: rgws
mds_group_name: mdss
mgr_group_name: mgrs
ntp_service_enabled: true
ntp_daemon_type: chronyd
ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: octopus
monitor_interface: eno3
journal_size: 10240 # OSD journal size in MB
public_network: 0.0.0.0/0
radosgw_interface: eno3
dashboard_admin_user: admin
dashboard_admin_password: xxxxxxxxxx
grafana_admin_user: admin
grafana_admin_password: xxxxxxxxxx
Notes:
- Change monitor_interface/radosgw_interface to the name of the default NIC on the target hosts, e.g. bond0
- The target hosts need the ca-certificates package: yum -y install ca-certificates
- They also need the Python dependencies: python3 -m pip install six pyyaml
- ceph_release_num depends on the release being deployed; on CentOS 7 with stable-4.0 it is 14, which corresponds to Ceph Nautilus
- ceph_stable_release likewise depends on the release; on CentOS 7 with stable-4.0 it is nautilus
- Set public_network to the subnet the hosts' IPs live on, e.g. 192.168.0.0/16 (the sketch after these notes shows how to find both the NIC name and the subnet)
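A quick way to look up the values for monitor_interface/radosgw_interface and public_network; the interface name eno3 and the addresses shown are only examples:
ip route show default          # the "dev ..." field is the default NIC, e.g. eno3
ip -o -f inet addr show eno3   # the inet field gives the subnet, e.g. 192.168.1.10/24 -> public_network: 192.168.1.0/24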
cp group_vars/osds.yml.sample group_vars/osds.yml
cat group_vars/osds.yml | grep -v ^# | grep -v ^$
---
dummy:
copy_admin_key: true
devices:
- /dev/sdb
- /dev/sdc
- /dev/sdd
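Before running the playbook it is worth confirming, on every OSD node, that these devices exist and carry no leftover data (the device names are the ones assumed above):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdb /dev/sdc /dev/sdd
# if a disk still has old signatures, wipe it first (destructive):
# wipefs -a /dev/sdb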
hosts (the Ansible inventory, referenced below as -i hosts)
# Ceph admin user for SSH and Sudo
[all:vars]
ansible_ssh_user=root
ansible_become=true
ansible_become_method=sudo
ansible_become_user=root
# Ceph Monitor Nodes
[mons]
ceph1
ceph2
ceph3
[mdss]
ceph1
ceph2
ceph3
[rgws]
ceph1
ceph2
ceph3
[osds]
ceph1
ceph2
ceph3
[mgrs]
ceph1
ceph2
ceph3
[grafana-server]
ceph1
site.yml (copied from site.yml.sample; only the hosts list is shown)
- hosts:
  - mons
  - osds
  - mdss
  - rgws
  #- nfss
  #- rbdmirrors
  #- clients
  - mgrs
  #- iscsigws
  #- iscsi-gws # for backward compatibility only!
  - grafana-server
  #- rgwloadbalancers
Deploy the Ceph cluster
ansible-playbook -i hosts site.yml
When the run succeeds, the playbook finishes with a PLAY RECAP in which every host reports failed=0 and unreachable=0.
Tearing down the Ceph cluster
cd /usr/local/ceph-ansible # adjust to wherever ceph-ansible was cloned
ansible-playbook -i hosts infrastructure-playbooks/purge-cluster.yml
yum list installed | grep ceph # confirm the Ceph packages are gone
Checks after deployment
ceph df
ceph osd df
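A few more read-only checks that are useful right after the playbook finishes:
ceph -s            # overall health, mon quorum, number of OSDs up/in
ceph osd tree      # OSD-to-host mapping
ceph health detail # details if the cluster is not HEALTH_OK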
Adding an OSD node
Add the new OSD node to the [osds] section of the hosts inventory (see the example below), then run:
ansible-playbook -vv -i hosts site.yml --limit {new osds node}   # use site-container.yml instead for containerized deployments
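For example, if the new node were called ceph4 (a hypothetical hostname), the inventory section would grow to the following, and the playbook would be run with --limit ceph4:
[osds]
ceph1
ceph2
ceph3
ceph4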
Kubernetes + Ceph
Using RBD storage
Configure the StorageClass
# Install ceph-common on every k8s node that will use Ceph (note: there are kernel version requirements)
# kubelet needs the rbd command to map the images backing RBD volumes
yum install -y ceph-common
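The kernel requirement mainly means the rbd kernel module must be available on those nodes; a quick check, assuming a stock CentOS kernel:
modprobe rbd && lsmod | grep rbd   # should list the rbd module; failure points at a kernel without RBD support
uname -r                           # note the kernel version when troubleshooting image feature mismatches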
# Create the RBD pool (on a Ceph mon node)
ceph osd pool create kube 128
ceph osd pool ls
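Since Luminous, Ceph raises a health warning for pools that carry no application tag, so it is worth tagging the new pool for rbd right away:
ceph osd pool application enable kube rbd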
# Create the user Kubernetes will use to access Ceph (on a Ceph mon node)
cd /etc/ceph
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
# Print the keys (on a Ceph mon or admin node)
ceph auth get-key client.admin
ceph auth get-key client.kube
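If the same shell can reach both the Ceph CLI and kubectl, the keys can be captured straight into the variables used below instead of being pasted by hand (otherwise copy the printed values across):
CEPH_ADMIN_SECRET=$(ceph auth get-key client.admin)
CEPH_KUBE_SECRET=$(ceph auth get-key client.kube)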
# Create the admin secret (in kube-system)
CEPH_ADMIN_SECRET='xxxxxxxxxxxxxxxxxxxx=='
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
--from-literal=key=$CEPH_ADMIN_SECRET \
--namespace=kube-system
# Create the secret that PVCs in the xxx-system namespace will use to access Ceph
CEPH_KUBE_SECRET='xxxxxxxxxxxxxxxxxxxxxx=='
kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
--from-literal=key=$CEPH_KUBE_SECRET \
--namespace=xxx-system
# Inspect the secrets
kubectl get secret ceph-user-secret -nxxx-system -o yaml
kubectl get secret ceph-secret -nkube-system -o yaml
# Define the StorageClass
cat storageclass-ceph-rdb.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: xxx-ceph-rdb
provisioner: kubernetes.io/rbd
parameters:
  monitors: xxx.xxx.xxx.xxx:6789,xxx.xxx.xxx.xxx:6789,xxx.xxx.xxx.xxx:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
# Create it
kubectl apply -f storageclass-ceph-rdb.yaml
# Verify
kubectl get sc
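To confirm dynamic provisioning actually works, a throwaway PVC can be created against the new class (test-rbd-pvc is a hypothetical name; it has to live in xxx-system because that is where ceph-user-secret was created):
cat test-rbd-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-rbd-pvc
  namespace: xxx-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: xxx-ceph-rdb
  resources:
    requests:
      storage: 1Gi
kubectl apply -f test-rbd-pvc.yaml
kubectl get pvc -n xxx-system   # should reach Bound within a few seconds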
Using CephFS storage
Deploy cephfs-provisioner
# Kubernetes has no in-tree dynamic provisioning for CephFS,
# so use the community cephfs-provisioner
cat external-storage-cephfs-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: xxx-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: xxx-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: xxx-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: xxx-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: xxx-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: xxx-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:v2.0.0-k8s1.11"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
kubectl apply -f external-storage-cephfs-provisioner.yaml
# Check the pod status and wait until it is Running before continuing
kubectl get pod -n xxx-system
Configure the StorageClass
more storageclass-cephfs.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: xxx-cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: xxx.xxx.xxx.xxx:6789,xxx.xxx.xxx.xxx:6789,xxx.xxx.xxx.xxx:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  claimRoot: /volumes/kubernetes
# Create it
kubectl apply -f storageclass-cephfs.yaml
# Verify
kubectl get sc
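The same kind of smoke test works for CephFS; unlike RBD, CephFS volumes can be mounted ReadWriteMany (test-cephfs-pvc is a hypothetical name):
cat test-cephfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-cephfs-pvc
  namespace: xxx-system
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: xxx-cephfs
  resources:
    requests:
      storage: 1Gi
kubectl apply -f test-cephfs-pvc.yaml
kubectl get pvc -n xxx-system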
Issues to watch for
- During a ceph-ansible run the wrong package version may get installed (seen on CentOS 8). When that happens, check the yum repo: CentOS 7 hosts must use the Ceph repo built for CentOS 7, and CentOS 8 hosts the one built for CentOS 8.
- The Ceph cluster deployed in UAT runs a fairly new release (15.2, Octopus) built against a newer kernel, so there are kernel version requirements on the machines where ceph-common is installed; keep this in mind. If production is later deployed entirely on CentOS 7 (and therefore the older release that goes with stable-4.0), the kernel version issue goes away.
- When deploying with ansible, check whether the firewall is enabled after the OSDs join. If you do not want the firewall on, disable it or adjust the ansible playbooks accordingly (or open the Ceph ports, as sketched below).
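If keeping firewalld running is preferable to disabling it, the bundled ceph service definitions can be opened on every node instead (a sketch; zones and extra services such as the dashboard may need adjusting):
firewall-cmd --permanent --add-service=ceph-mon   # 3300/6789 on monitor nodes
firewall-cmd --permanent --add-service=ceph       # 6800-7300 for OSD/MGR/MDS daemons
firewall-cmd --reload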