
Ceph cluster deployment and usage (block devices) (3 mon + 3 osd + 3 mgr) + 1 ceph-deploy node


Operations on the three Ceph nodes:

Create a sudo user on each node, add the deploy node's SSH public key, and allow passwordless access from the deploy node.
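A minimal sketch of this step, assuming the deploy user is named fungaming (the same account passed to --username later) and the commands are run as root on each node:
useradd -m fungaming
passwd fungaming
echo "fungaming ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/fungaming
chmod 0440 /etc/sudoers.d/fungaming
# Then, from the deploy node, generate a key pair and copy the public key to every node:
ssh-keygen
ssh-copy-id fungaming@ceph-1
ssh-copy-id fungaming@ceph-2
ssh-copy-id fungaming@ceph-3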
Disable SELinux (set SELINUX=disabled in /etc/selinux/config):
setenforce 0
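To make the SELinux change survive a reboot, a one-line sketch that edits the file mentioned above:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config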
yum install yum-plugin-priorities -y
Time synchronization:
chronyc makestep
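chronyc makestep assumes chronyd is installed and running; a minimal sketch if it is not:
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc makestep
chronyc sources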

Operations on the deploy node:

yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
cat << EOM >>/etc/hosts
192.168.200.237 ceph-1
192.168.200.238 ceph-2
192.168.200.239 ceph-3
EOM
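The original flow assumes ceph-deploy itself is already installed on this node; a minimal sketch to install it from the ceph-noarch repo configured above (python-setuptools is listed as an assumed dependency of ceph-deploy 2.x on el7):
yum install -y python-setuptools ceph-deploy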
Create a working directory for the deployment configuration files and keys; run all subsequent ceph-deploy commands from this directory:
mkdir my-cluster
cd my-cluster
If redeploying, clean up any previous Ceph installation first:
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*

Start the deployment:

ceph-deploy --username fungaming new ceph-1 ceph-2 ceph-3
Edit ceph.conf (vi ceph.conf) and add the following; the first three settings go under [global]:
public network = 192.168.200.0/24
osd pool default pg num = 128
osd pool default pgp num = 128
[mon]
mon allow pool delete = true
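For reference, after the edit ceph.conf should look roughly like the sketch below; the fsid, mon_initial_members and mon_host values are generated by ceph-deploy new (the fsid here is a placeholder, and the auth settings it also generates are omitted):
[global]
fsid = <generated by ceph-deploy>
mon_initial_members = ceph-1, ceph-2, ceph-3
mon_host = 192.168.200.237,192.168.200.238,192.168.200.239
public network = 192.168.200.0/24
osd pool default pg num = 128
osd pool default pgp num = 128

[mon]
mon allow pool delete = true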
Optionally point ceph-deploy at the mirrors.163.com mirror for the install step:
export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-mimic/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
Install Ceph:
ceph-deploy --username fungaming install ceph-1 ceph-2 ceph-3
Deploy the monitors:
ceph-deploy --username fungaming mon create ceph-1 ceph-2 ceph-3
ceph-deploy --username fungaming mon create-initial
Push the configuration and admin keyring to the nodes:
ceph-deploy --username fungaming admin ceph-1 ceph-2 ceph-3
Deploy the managers:
ceph-deploy --username fungaming mgr create ceph-1 ceph-2 ceph-3
List the disks on the OSD nodes:
ceph-deploy disk list ceph-1 ceph-2 ceph-3
Zap (wipe) the data disks; the command is run once per disk per node, as listed below:
ceph-deploy disk zap ceph-1 /dev/sdb
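For the layout used below (/dev/sdb and /dev/sdc on ceph-1, ceph-2 and ceph-3) the remaining zap commands are:
ceph-deploy disk zap ceph-2 /dev/sdb
ceph-deploy disk zap ceph-3 /dev/sdb
ceph-deploy disk zap ceph-1 /dev/sdc
ceph-deploy disk zap ceph-2 /dev/sdc
ceph-deploy disk zap ceph-3 /dev/sdc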
Create the OSDs:
ceph-deploy --username fungaming osd create --data /dev/sdb ceph-1
ceph-deploy --username fungaming osd create --data /dev/sdb ceph-2
ceph-deploy --username fungaming osd create --data /dev/sdb ceph-3
ceph-deploy --username fungaming osd create --data /dev/sdc ceph-1
ceph-deploy --username fungaming osd create --data /dev/sdc ceph-2
ceph-deploy --username fungaming osd create --data /dev/sdc ceph-3

# Installation complete.
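A quick sanity check at this point (run on any node that received the admin keyring) should report HEALTH_OK with 6 OSDs up and in:
ceph -s
ceph osd tree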

PG calculation

Using the calculation method from the official site http://ceph.com/pgcalc/:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
OSDs: 6, replication count: 3, pools: 2
Total PGs = ((6 * 100) / 3) / 2 = 100; the nearest power of two is 128, so pg_num is set to 128.
Push the updated configuration to the nodes:
ceph-deploy --username fungaming --overwrite-conf config push ceph-1 ceph-2 ceph-3
After the push, restart the monitors on each Ceph node:
systemctl restart ceph-mon.target

Operations on the Ceph nodes:

Check cluster status:

ceph health detail
ceph -s
Check the monitor quorum status:
ceph quorum_status --format json-pretty

1. Adding rules to the Ceph CRUSH map

Manual approach:
Create root and host buckets:
ceph osd crush add-bucket root-nvme root
ceph osd crush add-bucket root-ssd root
ceph osd crush add-bucket host1-nvme host
ceph osd crush add-bucket host2-nvme host
ceph osd crush add-bucket host3-nvme host
ceph osd crush add-bucket host1-ssd host
ceph osd crush add-bucket host2-ssd host
ceph osd crush add-bucket host3-ssd host
Move the host buckets under the roots:
ceph osd crush move host1-ssd root=root-ssd
ceph osd crush move host2-ssd root=root-ssd
ceph osd crush move host3-ssd root=root-ssd
ceph osd crush move host3-nvme root=root-nvme
ceph osd crush move host2-nvme root=root-nvme
ceph osd crush move host1-nvme root=root-nvme
Move the OSDs into the host buckets:
ceph osd crush move osd.0 host=host1-nvme
ceph osd crush move osd.1 host=host2-nvme
ceph osd crush move osd.2 host=host3-nvme
ceph osd crush move osd.3 host=host1-ssd
ceph osd crush move osd.4 host=host2-ssd
ceph osd crush move osd.5 host=host3-ssd
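Verify the new bucket layout before touching the rules:
ceph osd tree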
Export and decompile the CRUSH map:
ceph osd getcrushmap -o crushmap.txt
crushtool -d crushmap.txt -o crushmap-decompile
Edit the rules (vi crushmap-decompile):
# rules
rule nvme {
id 1
type replicated
min_size 1
max_size 10
step take root-nvme
step chooseleaf firstn 0 type host
step emit
}

rule ssd {
id 2
type replicated
min_size 1
max_size 10
step take root-ssd
step chooseleaf firstn 0 type host
step emit
}
Compile and import the CRUSH map:
crushtool -c crushmap-decompile -o crushmap-compiled
ceph osd setcrushmap -i crushmap-compiled
Configure ceph.conf so that OSDs do not update the CRUSH map on startup:
[osd]
osd crush update on start = false
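For this setting to take effect it must be pushed to the OSD nodes and the OSDs restarted; a sketch reusing the push command shown earlier:
ceph-deploy --username fungaming --overwrite-conf config push ceph-1 ceph-2 ceph-3
# on each OSD node:
systemctl restart ceph-osd.target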
Automatic approach (device-class based grouping):

Every OSD device in Ceph can be associated with a class. By default, the device type is detected automatically when the OSD is created and the corresponding class is assigned. There are typically three classes: hdd, ssd, and nvme.

### Manually change the class labels
#View the current cluster layout
ceph osd tree
#List the CRUSH device classes
ceph osd crush class ls
#Remove the class from osd.0, osd.1 and osd.2
for i in 0 1 2;do ceph osd crush rm-device-class osd.$i;done
#Set the class of osd.0, osd.1 and osd.2 to nvme
for i in 0 1 2;do ceph osd crush set-device-class nvme osd.$i;done
#Create a CRUSH rule that uses nvme devices
ceph osd crush rule create-replicated rule-auto-nvme default host nvme
#Create a CRUSH rule that uses ssd devices
ceph osd crush rule create-replicated rule-auto-ssd default host ssd
#List the cluster's CRUSH rules
ceph osd crush rule ls

2. Create a new pool

#Create the pool
ceph osd pool create pool-ssd 128
#Or create it and assign a CRUSH rule in one step:
ceph osd pool create pool-ssd 128 128 rule-auto-ssd
# Set the pool's application type
ceph osd pool application enable pool-ssd rbd

OSD pool status checks

#View and set pg_num / pgp_num
ceph osd pool get hdd_pool pg_num
ceph osd pool get hdd_pool pgp_num
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
#OSD and pool status
ceph osd tree
rados df
ceph df
ceph osd lspools

#List CRUSH rules and show a pool's rule
ceph osd crush rule ls
ceph osd pool get pool-ssd crush_rule
#Show pool details
ceph osd pool ls detail

3. Create an object to test the pool

rados -p pool-ssd ls
echo "hahah" >test.txt
rados -p pool-ssd put test test.txt
rados -p pool-ssd ls
#Show which OSDs hold the object
ceph osd map pool-ssd test
#Delete the object
rados rm -p pool-ssd test

Miscellaneous

#Delete a pool
ceph osd pool delete pool-ssd pool-ssd --yes-i-really-really-mean-it
#Set a pool's crush_rule (e.g. to the rule-auto-ssd rule created above)
ceph osd pool set pool-ssd crush_rule rule-auto-ssd

Client-side block device configuration

Deploy Ceph on every client that needs to use the Ceph storage.

Operations on the deploy node:

export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-mimic/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
ceph-deploy install ceph-client
ceph-deploy admin ceph-client
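If rbd commands on the client later fail with a permission error on the keyring, the upstream quick start suggests making the admin keyring readable; shown here as a hint, not part of the original flow (run on ceph-client):
sudo chmod +r /etc/ceph/ceph.client.admin.keyring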

Operations on the Ceph client:

#Create a block device image
rbd create pool-ssd/foo --size 1024 --image-feature layering
#Map the image to a block device
rbd map pool-ssd/foo --name client.admin

mkfs.xfs /dev/rbd0
mkdir -p /mnt/rbdtest
mount /dev/rbd0 /mnt/rbdtest/
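A quick check that the RBD-backed filesystem is mounted and writable (a sketch; the test file name is arbitrary):
df -h /mnt/rbdtest
echo ok > /mnt/rbdtest/write-test && cat /mnt/rbdtest/write-test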

==Block device queries==

#Operations on the Ceph client
Image information
rbd ls pool-ssd
rbd info pool-ssd/foo
List mapped block devices
rbd showmapped
Unmap the block device
rbd unmap /dev/rbd0
Remove the block device image
rbd rm pool-ssd/foo


From: https://blog.51cto.com/starsliao/5764138
