
Configuring Ceph #1

Published: 2023-09-12 14:06:23
Tags: ceph, node01, mon, mgr, Ceph configuration

The environment for this Ceph deployment is as follows:

hostname            ip               roles
node01.srv.world    192.168.10.101   Object Storage; Monitor Daemon; Manager Daemon
node02.srv.world    192.168.10.102   Object Storage
node03.srv.world    192.168.10.103   Object Storage

All of the above nodes run CentOS Stream 9 with a minimal install; sdb will be used as the dedicated Ceph disk, SELinux is set to disabled, and the hosts file has already been updated on each node.
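For reference, "the hosts file has already been updated" means every node can resolve the others by name. A minimal sketch of the assumed entries, written to a temporary file here so it can be inspected without touching the real /etc/hosts (the sample path is only for illustration):

```shell
# Hypothetical /etc/hosts entries assumed by this walkthrough;
# written to a temp file rather than the real /etc/hosts.
cat > /tmp/hosts.sample <<'EOF'
192.168.10.101 node01.srv.world node01
192.168.10.102 node02.srv.world node02
192.168.10.103 node03.srv.world node03
EOF
# Each node should carry all three name/IP mappings:
grep -c 'srv\.world' /tmp/hosts.sample   # → 3
```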

1. On node01, set up passwordless SSH trust with the Ceph cluster hosts

[root@node01 ~]#  ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@node01 ~]#  vi ~/.ssh/config
[root@node01 ~]# cat ~/.ssh/config
Host node01
    Hostname node01.srv.world
    User root
Host node02
    Hostname node02.srv.world
    User root
Host node03
    Hostname node03.srv.world
    User root
[root@node01 ~]#
[root@node01 ~]#  chmod 600 ~/.ssh/config
[root@node01 ~]# ssh-copy-id node01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node01.srv.world (192.168.10.101)' can't be established.
ED25519 key fingerprint is SHA256:U3nSPH5e9wZk88aUyzbW8tTL5XdJDoyK7TgrC4cCTYI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[root@node01 ~]# ssh-copy-id node02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node02.srv.world (192.168.10.102)' can't be established.
ED25519 key fingerprint is SHA256:LLttxvU9c69QENqB+YiVaP7IHiWBEvXWlqdkf1tYp1I.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: 192.168.10.102
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node02'"
and check to make sure that only the key(s) you wanted were added.

[root@node01 ~]# ssh-copy-id node03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node03.srv.world (192.168.10.103)' can't be established.
ED25519 key fingerprint is SHA256:m1UlGDoYsJeQdPR0R79HN2i44TdYPCATEP2q8lXkq68.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:4: 192.168.10.103
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node03'"
and check to make sure that only the key(s) you wanted were added.

[root@node01 ~]#

2. From node01, install Ceph on every cluster node

[root@node01 ~]# for NODE in node01 node02 node03
do
    ssh $NODE "dnf -y install centos-release-ceph-reef epel-release; dnf -y install ceph"
done

CentOS Stream 9 - AppStream                     1.0 MB/s |  18 MB     00:17
CentOS Stream 9 - Extras packages               3.2 kB/s |  14 kB     00:04
Last metadata expiration check: 0:00:01 ago on Tue 12 Sep 2023 12:22:44 PM CST.
Dependencies resolved.
================================================================================
 Package                         Arch     Version         Repository       Size
================================================================================
Installing:
 centos-release-ceph-reef        noarch   1.0-1.el9s      extras-common   7.3 k
 epel-release                    noarch   9-7.el9         extras-common    19 k
Installing dependencies:
 centos-release-storage-common   noarch   2-5.el9s        extras-common   8.3 k
Installing weak dependencies:
 epel-next-release               noarch   9-7.el9         extras-common   8.1 k

Transaction Summary
================================================================================
Install  4 Packages

Total download size: 42 k
Installed size: 32 k
Downloading Packages:
(1/4): epel-next-release-9-7.el9.noarch.rpm     1.6 kB/s | 8.1 kB     00:05
(2/4): centos-release-storage-common-2-5.el9s.n 1.6 kB/s | 8.3 kB     00:05
(3/4): epel-release-9-7.el9.noarch.rpm          273 kB/s |  19 kB     00:00
(4/4): centos-release-ceph-reef-1.0-1.el9s.noar 1.4 kB/s | 7.3 kB     00:05
--------------------------------------------------------------------------------
Total                                           5.7 kB/s |  42 kB     00:07
CentOS Stream 9 - Extras packages               184 kB/s | 2.1 kB     00:00
Importing GPG key 0x1D997668:
 Userid     : "CentOS Extras SIG (https://wiki.centos.org/SpecialInterestGroup) <[email protected]>"
 Fingerprint: 363F C097 2F64 B699 AED3 968E 1FF6 A217 1D99 7668
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Extras-SHA512
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : epel-release-9-7.el9.noarch                            1/4
  Running scriptlet: epel-release-9-7.el9.noarch                            1/4
Many EPEL packages require the CodeReady Builder (CRB) repository.
It is recommended that you run /usr/bin/crb enable to enable the CRB repository.

  Installing       : epel-next-release-9-7.el9.noarch                       2/4
  Installing       : centos-release-storage-common-2-5.el9s.noarch          3/4
  Installing       : centos-release-ceph-reef-1.0-1.el9s.noarch             4/4
  Running scriptlet: centos-release-ceph-reef-1.0-1.el9s.noarch             4/4
  Verifying        : centos-release-ceph-reef-1.0-1.el9s.noarch             1/4
  Verifying        : centos-release-storage-common-2-5.el9s.noarch          2/4
  Verifying        : epel-next-release-9-7.el9.noarch                       3/4
  Verifying        : epel-release-9-7.el9.noarch                            4/4

Installed:
  centos-release-ceph-reef-1.0-1.el9s.noarch
  centos-release-storage-common-2-5.el9s.noarch
  epel-next-release-9-7.el9.noarch
  epel-release-9-7.el9.noarch

Complete!

3. Configure the Monitor Daemon and Manager Daemon on node01

[root@node01 ~]# uuidgen
1293692e-ff54-43d7-a6b2-f96d82d2a6ac

Create the Ceph configuration file

[root@node01 ~]# vi /etc/ceph/ceph.conf
[root@node01 ~]# cat /etc/ceph/ceph.conf
[global]
# specify cluster network for monitoring
cluster network = 192.168.10.0/24
# specify public network
public network = 192.168.10.0/24
# specify UUID generated above
fsid = 1293692e-ff54-43d7-a6b2-f96d82d2a6ac
# specify IP address of Monitor Daemon
mon host = 192.168.10.101
# specify Hostname of Monitor Daemon
mon initial members = node01
osd pool default crush rule = -1

# mon.(Node name)
[mon.node01]
# specify Hostname of Monitor Daemon
host = node01
# specify IP address of Monitor Daemon
mon addr = 192.168.10.101
# allow to delete pools
mon allow pool delete = true

1) Generate the secret key required for cluster monitoring

[root@node01 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring

2) Generate a key for the cluster admin

[root@node01 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring

3) Generate a bootstrap-osd key

[root@node01 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring

4) Import the keys

[root@node01 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
[root@node01 ~]#  ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring

5) Generate the monitor map

[root@node01 ~]# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@node01 ~]# NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@node01 ~]# NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@node01 ~]# monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
setting min_mon_release = pacific
monmaptool: set fsid to 1293692e-ff54-43d7-a6b2-f96d82d2a6ac
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)
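The three grep/awk extractions above simply pull the last whitespace-separated field from the matching lines of ceph.conf. A self-contained sketch of the same parsing against a sample file (the /tmp path is only for illustration):

```shell
# Reproduce the fsid / monitor-name / monitor-IP extraction
# against a sample ceph.conf.
cat > /tmp/ceph.conf.sample <<'EOF'
[global]
fsid = 1293692e-ff54-43d7-a6b2-f96d82d2a6ac
mon host = 192.168.10.101
mon initial members = node01
EOF
FSID=$(grep "^fsid" /tmp/ceph.conf.sample | awk '{print $NF}')
NODENAME=$(grep "^mon initial" /tmp/ceph.conf.sample | awk '{print $NF}')
NODEIP=$(grep "^mon host" /tmp/ceph.conf.sample | awk '{print $NF}')
echo "$FSID $NODENAME $NODEIP"
# → 1293692e-ff54-43d7-a6b2-f96d82d2a6ac node01 192.168.10.101
```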

6) Create the working directory for the Monitor Daemon

[root@node01 ~]# mkdir /var/lib/ceph/mon/ceph-node01

7) Associate the key and monmap with the monitor daemon

[root@node01 ~]# ceph-mon --cluster ceph --mkfs -i $NODENAME --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
[root@node01 ~]# chown ceph:ceph /etc/ceph/ceph.*
[root@node01 ~]# chown -R ceph:ceph /var/lib/ceph/mon/ceph-node01 /var/lib/ceph/bootstrap-osd
[root@node01 ~]# systemctl enable --now ceph-mon@$NODENAME
Created symlink /etc/systemd/system/ceph-mon.target.wants/[email protected] → /usr/lib/systemd/system/[email protected].
[root@node01 ~]# systemctl status ceph-mon@node01
● [email protected] - Ceph cluster monitor daemon
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; preset>
     Active: active (running) since Tue 2023-09-12 13:10:19 CST; 31s ago
   Main PID: 13386 (ceph-mon)
      Tasks: 24
     Memory: 14.7M
        CPU: 176ms
     CGroup: /system.slice/system-ceph\x2dmon.slice/[email protected]
             └─13386 /usr/bin/ceph-mon -f --cluster ceph --id node01 --setuser >

Sep 12 13:10:19 node01.srv.world systemd[1]: Started Ceph cluster monitor daemo>
Sep 12 13:10:19 node01.srv.world ceph-mon[13386]: 2023-09-12T13:10:19.299+0800 >
Sep 12 13:10:19 node01.srv.world ceph-mon[13386]:          continuing with monm>

8) Enable the Messenger v2 protocol and the placement-group auto-scaler module

[root@node01 ~]# ceph mon enable-msgr2
[root@node01 ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@node01 ~]# ceph mgr module enable pg_autoscaler
module 'pg_autoscaler' is already enabled (always-on)

9) Create the working directory for the Manager Daemon

[root@node01 ~]# mkdir /var/lib/ceph/mgr/ceph-node01

10) Create an authentication key for the Manager Daemon

[root@node01 ~]# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.node01]
        key = AQBU9P9kpbcnHBAAkZjWL/N2o2vLaqrjVR+8Zw==
[root@node01 ~]#  ceph auth get-or-create mgr.node01 > /etc/ceph/ceph.mgr.admin.keyring
[root@node01 ~]# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-node01/keyring
[root@node01 ~]# chown ceph:ceph /etc/ceph/ceph.mgr.admin.keyring
[root@node01 ~]#  chown -R ceph:ceph /var/lib/ceph/mgr/ceph-node01
[root@node01 ~]#  systemctl enable --now ceph-mgr@$NODENAME
Created symlink /etc/systemd/system/ceph-mgr.target.wants/[email protected] → /usr/lib/systemd/system/[email protected].
[root@node01 ~]# systemctl status ceph-mgr@node01
● [email protected] - Ceph cluster manager daemon
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; preset>
     Active: active (running) since Tue 2023-09-12 13:19:01 CST; 1min 27s ago
   Main PID: 13610 (ceph-mgr)
      Tasks: 85 (limit: 12134)
     Memory: 416.0M
        CPU: 9.523s
     CGroup: /system.slice/system-ceph\x2dmgr.slice/[email protected]
             └─13610 /usr/bin/ceph-mgr -f --cluster ceph --id node01 --setuser >

Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.229+0800 >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.338+0800 >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.538+0800 >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.625+0800 >
Sep 12 13:19:09 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:09.811+0800 >
Sep 12 13:19:10 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:10.001+0800 >
Sep 12 13:19:10 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:10.282+0800 >
Sep 12 13:19:10 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:10.372+0800 >
Sep 12 13:19:11 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:11.240+0800 >
Sep 12 13:19:11 node01.srv.world ceph-mgr[13610]: 2023-09-12T13:19:11.240+0800 >

11) If SELinux is enabled on node01, configure it as follows:

[root@node01 ~]#  vi cephmon.te
[root@node01 ~]# cat cephmon.te

module cephmon 1.0;

require {
        type ceph_t;
        type ptmx_t;
        type initrc_var_run_t;
        type sudo_exec_t;
        type chkpwd_exec_t;
        type shadow_t;
        class file { execute execute_no_trans lock getattr map open read };
        class capability { audit_write sys_resource };
        class process setrlimit;
        class netlink_audit_socket { create nlmsg_relay };
        class chr_file getattr;
}

#============= ceph_t ==============
allow ceph_t initrc_var_run_t:file { lock open read };
allow ceph_t self:capability { audit_write sys_resource };
allow ceph_t self:netlink_audit_socket { create nlmsg_relay };
allow ceph_t self:process setrlimit;
allow ceph_t sudo_exec_t:file { execute execute_no_trans open read map };
allow ceph_t ptmx_t:chr_file getattr;
allow ceph_t chkpwd_exec_t:file { execute execute_no_trans open read map };
allow ceph_t shadow_t:file { getattr open read };


[root@node01 ~]# checkmodule -m -M -o cephmon.mod cephmon.te
[root@node01 ~]# semodule_package --outfile cephmon.pp --module cephmon.mod
[root@node01 ~]# semodule -i cephmon.pp
[root@node01 ~]#

12) Adjust the firewalld rules so that the Ceph monitor ports can communicate

[root@node01 ~]# firewall-cmd --add-service=ceph-mon
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
[root@node01 ~]#

13) Verify the cluster status

[root@node01 ~]# ceph -s
  cluster:
    id:     1293692e-ff54-43d7-a6b2-f96d82d2a6ac
    health: HEALTH_WARN
            1 mgr modules have recently crashed
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 17m)
    mgr: node01(active, since 11m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@node01 ~]#

The next part will cover OSD configuration; stay tuned!

From: https://blog.51cto.com/capfzgs/7444977
