The environment for this Ceph deployment is as follows:
hostname         | IP             | roles
node01.srv.world | 192.168.10.101 | Object Storage; Monitor Daemon; Manager Daemon
node02.srv.world | 192.168.10.102 | Object Storage
node03.srv.world | 192.168.10.103 | Object Storage
All nodes run CentOS Stream 9 (minimal install). On each node, sdb is reserved as the dedicated Ceph disk, SELinux is set to disabled, and the hosts file has already been updated.
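The hosts-file entries assumed on each node would look like the following sketch (names and addresses as used in this article):

```
192.168.10.101   node01.srv.world node01
192.168.10.102   node02.srv.world node02
192.168.10.103   node03.srv.world node03
```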
1、On node01, set up SSH key-based trust with the cluster hosts
[root@node01 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@node01 ~]# vi ~/.ssh/config
[root@node01 ~]# cat ~/.ssh/config
Host node01
    Hostname node01.srv.world
    User root
Host node02
    Hostname node02.srv.world
    User root
Host node03
    Hostname node03.srv.world
    User root
[root@node01 ~]#
[root@node01 ~]# chmod 600 ~/.ssh/config
[root@node01 ~]# ssh-copy-id node01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node01.srv.world (192.168.10.101)' can't be established.
ED25519 key fingerprint is SHA256:U3nSPH5e9wZk88aUyzbW8tTL5XdJDoyK7TgrC4cCTYI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node01.srv.world's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.
[root@node01 ~]# ssh-copy-id node02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node02.srv.world (192.168.10.102)' can't be established.
ED25519 key fingerprint is SHA256:LLttxvU9c69QENqB+YiVaP7IHiWBEvXWlqdkf1tYp1I.
This host key is known by the following other names/addresses:
~/.ssh/known_hosts:1: 192.168.10.102
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node02.srv.world's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node02'"
and check to make sure that only the key(s) you wanted were added.
[root@node01 ~]# ssh-copy-id node03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node03.srv.world (192.168.10.103)' can't be established.
ED25519 key fingerprint is SHA256:m1UlGDoYsJeQdPR0R79HN2i44TdYPCATEP2q8lXkq68.
This host key is known by the following other names/addresses:
~/.ssh/known_hosts:4: 192.168.10.103
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node03.srv.world's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node03'"
and check to make sure that only the key(s) you wanted were added.
[root@node01 ~]#
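The three ~/.ssh/config stanzas above follow a single pattern, so they can also be generated in a loop. A minimal sketch (it writes to ./ssh_config.demo instead of ~/.ssh/config so the result can be reviewed first; the node names and srv.world domain are the ones used in this article):

```shell
#!/bin/sh
# Generate SSH client config stanzas for the cluster nodes.
# Writes to ./ssh_config.demo for review; copy into ~/.ssh/config when satisfied.
OUT=./ssh_config.demo
: > "$OUT"                      # truncate/create the output file
for NODE in node01 node02 node03; do
    cat >> "$OUT" <<EOF
Host $NODE
    Hostname $NODE.srv.world
    User root
EOF
done
chmod 600 "$OUT"                # the restrictive permissions ssh expects
cat "$OUT"
```

Once the output looks right, append it to ~/.ssh/config, keeping the 600 permissions.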
2、From node01, install Ceph on every cluster node
[root@node01 ~]# for NODE in node01 node02 node03
do
ssh $NODE "dnf -y install centos-release-ceph-reef epel-release; dnf -y install ceph"
done
CentOS Stream 9 - AppStream 1.0 MB/s | 18 MB 00:17
CentOS Stream 9 - Extras packages 3.2 kB/s | 14 kB 00:04
Last metadata expiration check: 0:00:01 ago on Tue 12 Sep 2023 12:22:44 PM CST.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
centos-release-ceph-reef noarch 1.0-1.el9s extras-common 7.3 k
epel-release noarch 9-7.el9 extras-common 19 k
Installing dependencies:
centos-release-storage-common noarch 2-5.el9s extras-common 8.3 k
Installing weak dependencies:
epel-next-release noarch 9-7.el9 extras-common 8.1 k
Transaction Summary
================================================================================
Install 4 Packages
Total download size: 42 k
Installed size: 32 k
Downloading Packages:
(1/4): epel-next-release-9-7.el9.noarch.rpm 1.6 kB/s | 8.1 kB 00:05
(2/4): centos-release-storage-common-2-5.el9s.n 1.6 kB/s | 8.3 kB 00:05
(3/4): epel-release-9-7.el9.noarch.rpm 273 kB/s | 19 kB 00:00
(4/4): centos-release-ceph-reef-1.0-1.el9s.noar 1.4 kB/s | 7.3 kB 00:05
--------------------------------------------------------------------------------
Total 5.7 kB/s | 42 kB 00:07
CentOS Stream 9 - Extras packages 184 kB/s | 2.1 kB 00:00
Importing GPG key 0x1D997668:
Userid : "CentOS Extras SIG (https://wiki.centos.org/SpecialInterestGroup) <[email protected]>"
Fingerprint: 363F C097 2F64 B699 AED3 968E 1FF6 A217 1D99 7668
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Extras-SHA512
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : epel-release-9-7.el9.noarch 1/4
Running scriptlet: epel-release-9-7.el9.noarch 1/4
Many EPEL packages require the CodeReady Builder (CRB) repository.
It is recommended that you run /usr/bin/crb enable to enable the CRB repository.
Installing : epel-next-release-9-7.el9.noarch 2/4
Installing : centos-release-storage-common-2-5.el9s.noarch 3/4
Installing : centos-release-ceph-reef-1.0-1.el9s.noarch 4/4
Running scriptlet: centos-release-ceph-reef-1.0-1.el9s.noarch 4/4
Verifying : centos-release-ceph-reef-1.0-1.el9s.noarch 1/4
Verifying : centos-release-storage-common-2-5.el9s.noarch 2/4
Verifying : epel-next-release-9-7.el9.noarch 3/4
Verifying : epel-release-9-7.el9.noarch 4/4
Installed:
centos-release-ceph-reef-1.0-1.el9s.noarch
centos-release-storage-common-2-5.el9s.noarch
epel-next-release-9-7.el9.noarch
epel-release-9-7.el9.noarch
Complete!
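Once the loop completes, it is worth confirming that the ceph package actually landed on each node (for the remote nodes, wrap this in the same ssh loop as above). A small sketch that falls back to a message when the binary is missing:

```shell
#!/bin/sh
# Print the installed Ceph version, or a notice if ceph is not on PATH.
if command -v ceph >/dev/null 2>&1; then
    ceph --version
else
    echo "ceph not installed on $(hostname)"
fi
```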
3、Configure the [Monitor Daemon] and [Manager Daemon] on node01
[root@node01 ~]# uuidgen
1293692e-ff54-43d7-a6b2-f96d82d2a6ac
Create the Ceph configuration file
[root@node01 ~]# vi /etc/ceph/ceph.conf
[root@node01 ~]# cat /etc/ceph/ceph.conf
[global]
# specify cluster network (internal OSD replication/heartbeat traffic)
cluster network = 192.168.10.0/24
# specify public network
public network = 192.168.10.0/24
# specify UUID generated above
fsid = 1293692e-ff54-43d7-a6b2-f96d82d2a6ac
# specify IP address of Monitor Daemon
mon host = 192.168.10.101
# specify Hostname of Monitor Daemon
mon initial members = node01
osd pool default crush rule = -1
# mon.(Node name)
[mon.node01]
# specify Hostname of Monitor Daemon
host = node01
# specify IP address of Monitor Daemon
mon addr = 192.168.10.101
# allow pool deletion
mon allow pool delete = true
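As an alternative to editing the file by hand, the same ceph.conf can be rendered from shell variables, which avoids copy-paste mistakes in the fsid. A sketch (it writes to ./ceph.conf.demo rather than /etc/ceph/ceph.conf; substitute your own uuidgen output and addresses):

```shell
#!/bin/sh
# Render a minimal ceph.conf from variables; review it, then move it into place.
FSID=1293692e-ff54-43d7-a6b2-f96d82d2a6ac   # from uuidgen above
MON_NAME=node01
MON_IP=192.168.10.101
NET=192.168.10.0/24
cat > ./ceph.conf.demo <<EOF
[global]
cluster network = $NET
public network = $NET
fsid = $FSID
mon host = $MON_IP
mon initial members = $MON_NAME
osd pool default crush rule = -1
[mon.$MON_NAME]
host = $MON_NAME
mon addr = $MON_IP
mon allow pool delete = true
EOF
cat ./ceph.conf.demo
```

Review the result, then copy it to /etc/ceph/ceph.conf.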
1) Generate the secret key required for cluster monitoring
[root@node01 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring
2) Generate a key for the cluster admin user
[root@node01 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring
3) Generate a bootstrap OSD key
[root@node01 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
4) Import the keys into the monitor keyring
[root@node01 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
[root@node01 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
5) Generate the monitor map
[root@node01 ~]# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk '{print $NF}')
[root@node01 ~]# NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk '{print $NF}')
[root@node01 ~]# NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk '{print $NF}')
[root@node01 ~]# monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
setting min_mon_release = pacific
monmaptool: set fsid to 1293692e-ff54-43d7-a6b2-f96d82d2a6ac
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)
6) Create a working directory for the Monitor Daemon
[root@node01 ~]# mkdir /var/lib/ceph/mon/ceph-node01
7) Populate the Monitor Daemon with the key and monmap
[root@node01 ~]# ceph-mon --cluster ceph --mkfs -i $NODENAME --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
[root@node01 ~]# chown ceph:ceph /etc/ceph/ceph.*
[root@node01 ~]# chown -R ceph:ceph /var/lib/ceph/mon/ceph-node01 /var/lib/ceph/bootstrap-osd
[root@node01 ~]# systemctl enable --now ceph-mon@$NODENAME
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node01.service → /usr/lib/systemd/system/ceph-mon@.service.
[root@node01 ~]# systemctl status ceph-mon@node01.service
● ceph-mon@node01.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; preset>
Active: active (running) since Tue 2023-09-12 13:10:19 CST; 31s ago
Main PID: 13386 (ceph-mon)
Tasks: 24
Memory: 14.7M
CPU: 176ms
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@node01.service
└─13386 /usr/bin/ceph-mon -f --cluster ceph --id node01 --setuser >
Sep 12 13:10:19 node01.srv.world systemd[1]: Started Ceph cluster monitor daemo>
Sep 12 13:10:19 node01.srv.world ceph-mon[13386]: 2023-09-12T13:10:19.299+0800 >
Sep 12 13:10:19 node01.srv.world ceph-mon[13386]: continuing with monm>
8) Enable the Messenger v2 protocol and the placement group autoscaler module
[root@node01 ~]# ceph mon enable-msgr2
[root@node01 ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@node01 ~]# ceph mgr module enable pg_autoscaler
module 'pg_autoscaler' is already enabled (always-on)
9) Create a working directory for the Manager Daemon
[root@node01 ~]# mkdir /var/lib/ceph/mgr/ceph-node01
10) Create an authentication key for the Manager Daemon
[root@node01 ~]# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.node01]
key = AQBU9P9kpbcnHBAAkZjWL/N2o2vLaqrjVR+8Zw==
[root@node01 ~]# ceph auth get-or-create mgr.node01 > /etc/ceph/ceph.mgr.admin.keyring
[root@node01 ~]# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-node01/keyring
[root@node01 ~]# chown ceph:ceph /etc/ceph/ceph.mgr.admin.keyring
[root@node01 ~]# chown -R ceph:ceph /var/lib/ceph/mgr/ceph-node01
[root@node01 ~]# systemctl enable --now ceph-mgr@$NODENAME
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node01.service → /usr/lib/systemd/system/ceph-mgr@.service.
[root@node01 ~]# systemctl status ceph-mgr@node01.service
● ceph-mgr@node01.service - Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; preset>
Active: active (running) since Tue 2023-09-12 13:19:01 CST; 1min 27s ago
Main PID: 13610 (ceph-mgr)
Tasks: 85 (limit: 12134)
Memory: 416.0M
CPU: 9.523s
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@node01.service
└─13610 /usr/bin/ceph-mgr -f --cluster ceph --id node01 --setuser >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.229+0800 >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.338+0800 >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.538+0800 >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.625+0800 >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.811+0800 >
Sep 12 13:19:10 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:10.001+0800 >
Sep 12 13:19:10 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:10.282+0800 >
Sep 12 13:19:10 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:10.372+0800 >
Sep 12 13:19:11 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:11.240+0800 >
Sep 12 13:19:11 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:11.240+0800 >
11) If SELinux is enabled on node01, configure it as follows:
[root@node01 ~]# vi cephmon.te
[root@node01 ~]# cat cephmon.te
# create new
module cephmon 1.0;
require {
type ceph_t;
type ptmx_t;
type initrc_var_run_t;
type sudo_exec_t;
type chkpwd_exec_t;
type shadow_t;
class file { execute execute_no_trans lock getattr map open read };
class capability { audit_write sys_resource };
class process setrlimit;
class netlink_audit_socket { create nlmsg_relay };
class chr_file getattr;
}
#============= ceph_t ==============
allow ceph_t initrc_var_run_t:file { lock open read };
allow ceph_t self:capability { audit_write sys_resource };
allow ceph_t self:netlink_audit_socket { create nlmsg_relay };
allow ceph_t self:process setrlimit;
allow ceph_t sudo_exec_t:file { execute execute_no_trans open read map };
allow ceph_t ptmx_t:chr_file getattr;
allow ceph_t chkpwd_exec_t:file { execute execute_no_trans open read map };
allow ceph_t shadow_t:file { getattr open read };
[root@node01 ~]# checkmodule -m -M -o cephmon.mod cephmon.te
[root@node01 ~]# semodule_package --outfile cephmon.pp --module cephmon.mod
[root@node01 ~]# semodule -i cephmon.pp
[root@node01 ~]#
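Since the environment above has SELinux disabled, this step can be skipped entirely. A small guard makes that explicit by checking the mode before building and loading the module (a sketch; it only runs the build when SELinux is active and cephmon.te is present):

```shell
#!/bin/sh
# Build and load the cephmon policy module only when SELinux is enabled.
SEMODE=$(getenforce 2>/dev/null || echo Disabled)
echo "SELinux mode: $SEMODE"
if [ "$SEMODE" != "Disabled" ] && [ -f cephmon.te ]; then
    checkmodule -m -M -o cephmon.mod cephmon.te
    semodule_package --outfile cephmon.pp --module cephmon.mod
    semodule -i cephmon.pp
fi
```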
12) Update the firewalld rules so the Ceph ports are reachable
[root@node01 ~]# firewall-cmd --add-service=ceph-mon
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
[root@node01 ~]#
13) Verify the cluster status
[root@node01 ~]# ceph -s
cluster:
id: 1293692e-ff54-43d7-a6b2-f96d82d2a6ac
health: HEALTH_WARN
1 mgr modules have recently crashed
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum node01 (age 17m)
mgr: node01(active, since 11m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
[root@node01 ~]#
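The HEALTH_WARN here is expected at this stage: with zero OSDs, the pool-size warning cannot clear until the next section adds them. For scripting, the health field can be pulled out of the `ceph -s` output; a sketch run against a captured sample (the sample text mirrors the status above; on a live cluster, `ceph health` prints this field directly):

```shell
#!/bin/sh
# Extract the health field from `ceph -s` output.
# On a live cluster, capture it with: STATUS=$(ceph -s)
STATUS='  cluster:
    id:     1293692e-ff54-43d7-a6b2-f96d82d2a6ac
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3'
HEALTH=$(printf '%s\n' "$STATUS" | awk '/health:/ {print $2}')
echo "cluster health: $HEALTH"   # prints: cluster health: HEALTH_WARN
```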
The next section will cover OSD configuration. Stay tuned!
From: https://blog.51cto.com/capfzgs/7444977