Planning
Hostname | IP address | OS | Ceph version | Ceph disk | Size | Components | Role |
---|---|---|---|---|---|---|---|
master | 192.168.1.60 | CentOS 7.9 | ceph-14.2.22 | sdb | 100G | OSD, MON, MDS, MGR | admin node |
node01 | 192.168.1.70 | CentOS 7.9 | ceph-14.2.22 | sdb | 100G | OSD | worker node |
node02 | 192.168.1.80 | CentOS 7.9 | ceph-14.2.22 | sdb | 100G | OSD | worker node |
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@master ~]# uname -a
Linux master 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Prepare the environment
Attach a disk to each of the three nodes.
Configure /etc/hosts and passwordless SSH login on all three nodes.
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.60 master
192.168.1.70 node01
192.168.1.80 node02
[root@master ~]# ssh node01
Last login: Mon Aug 7 11:41:28 2023 from master
[root@node01 ~]# logout
Connection to node01 closed.
[root@master ~]# ssh node02
Last login: Mon Aug 7 11:41:31 2023 from master
[root@node02 ~]# logout
Connection to node02 closed.
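The passwordless login verified above can be set up with ssh-keygen and ssh-copy-id. A minimal sketch; KEYDIR defaults to a temporary directory here for safety, whereas on a real node you would use ~/.ssh and the default key path, and the ssh-copy-id loop must be run interactively once per node:

```shell
# Generate an RSA key pair non-interactively (empty passphrase).
# KEYDIR is a temp dir here; on a real node use ~/.ssh and the default path.
KEYDIR="${KEYDIR:-$(mktemp -d)}"
KEY="$KEYDIR/id_rsa"
[ -f "$KEY" ] || ssh-keygen -t rsa -N "" -f "$KEY" -q

# Push the public key to every node (run interactively; prompts for each root password):
# for node in node01 node02; do ssh-copy-id -i "$KEY.pub" root@"$node"; done
ls -l "$KEY" "$KEY.pub"
```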
Add a suitable Ceph package repository
Pick the version to install.
Here we choose rpm-nautilus 14.2.22.
Because the OS is CentOS 7, installing a 15.x release runs into trouble: ceph-mgr-dashboard has unresolvable Python 3 dependencies. Deploy 15.x or later on CentOS 8 or newer instead.
# yum install ceph-mgr-dashboard -y
...(output omitted)
--> Processing Dependency: python3-routes for package: 2:ceph-mgr-dashboard-15.2.15-0.el7.noarch
--> Processing Dependency: python3-cherrypy for package: 2:ceph-mgr-dashboard-15.2.15-0.el7.noarch
--> Processing Dependency: python3-jwt for package: 2:ceph-mgr-dashboard-15.2.15-0.el7.noarch
---> Package ceph-prometheus-alerts.noarch 2:15.2.15-0.el7 will be installed
---> Package python36-werkzeug.noarch 0:1.0.1-1.el7 will be installed
--> Finished Dependency Resolution
Error: Package: 2:ceph-mgr-dashboard-15.2.15-0.el7.noarch (Ceph-noarch)
Requires: python3-jwt
Error: Package: 2:ceph-mgr-dashboard-15.2.15-0.el7.noarch (Ceph-noarch)
Requires: python3-routes
Error: Package: 2:ceph-mgr-dashboard-15.2.15-0.el7.noarch (Ceph-noarch)
Requires: python3-cherrypy
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
This happens because, starting with the Octopus release, the MGR is written in Python 3, and the default repositories do not provide these three module packages; even installing them separately may fail or not take effect. According to the community this is a known issue. The recommendations are to use CentOS 8, to deploy Ceph in containers with cephadm, or to drop to an earlier Ceph release such as Nautilus, which is still written in Python 2 and has no missing-package problem.
Configure the yum repository
[root@master ~]# cat /etc/yum.repos.d/ceph.repo
[rpm-nautilus_x86_64]
name=rpm-14-2-22_x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
enabled=1
[rpm-nautilus-noarch]
name=rpm-14-2-22-noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
enabled=1
[root@master ~]# scp /etc/yum.repos.d/ceph.repo node01:/etc/yum.repos.d/
ceph.repo 100% 264 479.5KB/s 00:00
[root@master ~]# scp /etc/yum.repos.d/ceph.repo node02:/etc/yum.repos.d/
ceph.repo 100% 264 420.9KB/s 00:00
# Set on all three nodes
# echo "vm.swappiness = 10" >> /etc/sysctl.conf
# sysctl -p
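Note that the append above is not idempotent: rerunning it duplicates the line. A sketch of a guarded version; CONF points at a temporary file here, whereas on a real node it would be /etc/sysctl.conf:

```shell
# Set vm.swappiness = 10 only if the key is not already present in the file.
CONF="${CONF:-$(mktemp)}"
grep -q '^vm\.swappiness' "$CONF" || echo "vm.swappiness = 10" >> "$CONF"
grep '^vm\.swappiness' "$CONF"   # shows the effective line
```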
Install the packages
# Install on the master node
yum install -y ceph-deploy ceph ceph-radosgw ceph-mgr-dashboard
# Install on the worker nodes
yum install -y ceph ceph-radosgw ceph-mgr-dashboard
Initialize the Ceph cluster
[root@master ~]# cd /etc/ceph/
[root@master ceph]# ls
rbdmap
# Create the Ceph cluster
[root@master ceph]# ceph-deploy new master node01 node02
# ceph-deploy new creates a new cluster; it generates ceph.conf, ceph-deploy-ceph.log, and ceph.mon.keyring under /etc/ceph
[root@master ceph]# ls
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring rbdmap
Edit ceph.conf and append the last four lines under [global]
[root@master ceph]# cat ceph.conf
[global]
fsid = 7b0d03e0-3777-4964-bece-e5804e1db133
mon_initial_members = master, node01, node02
mon_host = 192.168.1.60,192.168.1.70,192.168.1.80
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
mon clock drift allowed = 2
mon clock drift warn backoff = 30
mon_pg_warn_max_per_osd = 500
mon_max_pg_per_osd = 500
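What the four appended options do, annotated (values as chosen above; the mixed underscore/space spellings are both accepted by Ceph):

```ini
# tolerate up to 2 s of clock skew between MONs before raising HEALTH_WARN
mon clock drift allowed = 2
# back off repeated clock-skew warnings to reduce log noise
mon clock drift warn backoff = 30
# only warn when an OSD carries more than 500 placement groups
mon_pg_warn_max_per_osd = 500
# hard cap on placement groups per OSD
mon_max_pg_per_osd = 500
```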
Create and initialize the MON daemons
[root@master ceph]# ceph-deploy mon create-initial
This creates and initializes the MONs, Ceph's monitor daemons.
On the master node, /etc/ceph now contains: ceph.bootstrap-mds.keyring, ceph.bootstrap-mgr.keyring, ceph.bootstrap-osd.keyring, ceph.bootstrap-rgw.keyring, ceph.client.admin.keyring, ceph.conf, ceph-deploy-ceph.log, ceph.mon.keyring, rbdmap.
On the worker nodes, /etc/ceph contains: ceph.conf, rbdmap, tmpHTFcfI.
Check the cluster status
Disable the insecure mode
cluster:
id: 7b0d03e0-3777-4964-bece-e5804e1db133
health: HEALTH_WARN
mons are allowing insecure global_id reclaim
Run:
ceph config set mon auth_allow_insecure_global_id_reclaim false
Wait about 5-10 seconds.
Check the cluster status again
[root@master ceph]# ceph -s
cluster:
id: 7b0d03e0-3777-4964-bece-e5804e1db133
health: HEALTH_OK
services:
mon: 3 daemons, quorum master,node01,node02 (age 35m)
mgr: no daemons active
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
Create the OSDs
Confirm the disk attached to each node
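A quick way to confirm the disks is lsblk; /dev/sdb should show up on every node with no partitions. A sketch (the remote loop assumes the passwordless SSH set up earlier):

```shell
# List whole disks with their size and type on the local node.
lsblk -d -o NAME,SIZE,TYPE
# On the remote nodes:
# for node in node01 node02; do ssh "$node" lsblk -d -o NAME,SIZE,TYPE; done
```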
Wipe (zap) the disk on each node
ceph-deploy disk zap master /dev/sdb
ceph-deploy disk zap node01 /dev/sdb
ceph-deploy disk zap node02 /dev/sdb
Create the OSDs on the disks
Turn each host's /dev/sdb disk into an OSD.
ceph-deploy osd create --data /dev/sdb master
ceph-deploy osd create --data /dev/sdb node01
ceph-deploy osd create --data /dev/sdb node02
Check the cluster status
Check the OSD status
[root@master ceph]# ceph --cluster=ceph osd stat --format=json
{"osdmap":{"epoch":13,"num_osds":3,"num_up_osds":3,"num_in_osds":3,"num_remapped_pgs":0}}
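The JSON form is handy in monitoring scripts. A sketch that extracts num_up_osds with grep and cut, using the sample output above as input:

```shell
# Sample output from `ceph osd stat --format=json` (copied from the cluster above).
json='{"osdmap":{"epoch":13,"num_osds":3,"num_up_osds":3,"num_in_osds":3,"num_remapped_pgs":0}}'
# Pull out the number of OSDs that are up.
up=$(printf '%s' "$json" | grep -o '"num_up_osds":[0-9]*' | cut -d: -f2)
echo "$up"   # prints 3
```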
Create the MGR daemons
ceph-deploy mgr create master node01 node02
Check the cluster status
Configure the dashboard
Enable the dashboard module
ceph mgr module enable dashboard
By default, all HTTP connections to the dashboard are secured with SSL/TLS. To get the dashboard up and running quickly, you can generate and install a self-signed certificate with the following built-in command:
# ceph dashboard create-self-signed-cert  # generates the certificate; SSL is disabled below, so this is skipped
[root@master ceph]# pwd
/etc/ceph
[root@master ceph]# echo Admin123 > ceph-password.txt # write the password file
[root@master ceph]# ceph config set mgr mgr/dashboard/ssl false # disable HTTPS
[root@master ceph]# ceph config set mgr mgr/dashboard/server_addr 192.168.1.60 # set the listen address
[root@master ceph]# ceph config set mgr mgr/dashboard/server_port 8888 # set the listen port
[root@master ceph]# ceph dashboard set-login-credentials admin -i /etc/ceph/ceph-password.txt # create the admin user and set its password
******************************************************************
*** WARNING: this command is deprecated. ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
[root@master ceph]# systemctl restart ceph-mgr.target
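With SSL disabled, the dashboard is reachable over plain HTTP at the address and port configured above. A trivial sketch that composes the URL from those values:

```shell
# Values set via `ceph config set mgr mgr/dashboard/...` above.
ADDR=192.168.1.60
PORT=8888
URL="http://$ADDR:$PORT/"
echo "$URL"   # open this in a browser and log in as admin / Admin123
```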
Create the MDS daemons
[root@master ceph]# ceph-deploy mds create master node01 node02
Note: the MDS is required when using the CephFS file system service; it serves file system metadata.
Check the cluster status
[root@master ceph]# ceph -s
cluster:
id: 7b0d03e0-3777-4964-bece-e5804e1db133
health: HEALTH_OK
services:
mon: 3 daemons, quorum master,node01,node02 (age 10m)
mgr: node01(active, since 3s), standbys: master, node02
mds: 3 up:standby
osd: 3 osds: 3 up (since 10m), 3 in (since 2h)
data:
pools: 4 pools, 128 pgs
objects: 14 objects, 1.6 KiB
usage: 3.0 GiB used, 297 GiB / 300 GiB avail
pgs: 128 active+clean
Create the RGW daemons
[root@master ceph]# ceph-deploy rgw create master node01 node02
Note: the RGW provides the RADOS object gateway for object storage.
Check the cluster status
[root@master ceph]# ceph -s
cluster:
id: 7b0d03e0-3777-4964-bece-e5804e1db133
health: HEALTH_OK
services:
mon: 3 daemons, quorum master,node01,node02 (age 12m)
mgr: node01(active, since 118s), standbys: master, node02
mds: 3 up:standby
osd: 3 osds: 3 up (since 12m), 3 in (since 2h)
rgw: 3 daemons active (master, node01, node02)
task status:
data:
pools: 4 pools, 128 pgs
objects: 189 objects, 1.6 KiB
usage: 3.0 GiB used, 297 GiB / 300 GiB avail
pgs: 128 active+clean
Restart the services
systemctl restart ceph.target
systemctl restart ceph-mds.target
systemctl restart ceph-mgr.target
systemctl restart ceph-mon.target
systemctl restart ceph-osd.target
systemctl restart ceph-radosgw.target
From: https://www.cnblogs.com/chuyiwang/p/17611041.html