
Ceph (5): CephFS Deployment, Usage, and MDS High Availability


1. Deploying the CephFS service

CephFS (Ceph File System) provides POSIX-compliant shared file system functionality. Clients mount it over the Ceph protocol and use the Ceph cluster as the backing data store: https://docs.ceph.com/en/latest/cephfs/

CephFS requires the Metadata Server (MDS) service, whose daemon is ceph-mds. The ceph-mds process manages the metadata of files stored on CephFS and coordinates access to the Ceph storage cluster.

On a Linux file system, when you run ls or similar commands on a directory, metadata such as file names, creation dates, sizes, inodes, and storage locations is read from structures kept on the disk partition. In CephFS, file data is striped into discrete objects and distributed across the cluster, so there is no single on-disk structure holding file metadata; instead, metadata is stored in a dedicated metadata pool. Clients do not access the metadata pool directly: reads and writes go through the MDS (metadata server). On reads, the MDS loads metadata from the metadata pool, caches it in memory (so later client requests can be answered quickly), and returns it to the client; on writes, the MDS updates its in-memory cache and persists the changes to the metadata pool.

The MDS organizes CephFS metadata as a hierarchy, similar to the Linux directory tree or the tiered cache directories used by nginx.
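To see this split in practice, once the cephfs-metadata and cephfs-data pools created in section 1.2 exist, each can be inspected directly with rados: directory and inode metadata objects land in the metadata pool, while the striped file contents land in the data pool. A quick check (output omitted):

# List a few objects from each pool (pool names as created in section 1.2)
cephadmin@ceph-deploy:~$ rados -p cephfs-metadata ls | head
cephadmin@ceph-deploy:~$ rados -p cephfs-data ls | head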


1.1 Deploying the MDS service

To use CephFS, the MDS service must be deployed first.

# Install ceph-mds on the ceph-mgr1 node
[root@ceph-mgr1 ~]#apt-cache madison ceph-mds
  ceph-mds | 16.2.14-1focal | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific focal/main amd64 Packages
[root@ceph-mgr1 ~]#apt install ceph-mds

# Verify the ceph-mds version
[root@ceph-mgr1 ~]#ceph-mds --version
ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)


# From the deploy node, create the MDS service on ceph-mgr1
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-deploy mds create ceph-mgr1
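If needed, the daemon can also be checked directly on ceph-mgr1; a minimal check, assuming the systemd unit follows the usual ceph-mds@<hostname> naming used later in this article:

# Confirm the ceph-mds daemon is running on ceph-mgr1
[root@ceph-mgr1 ~]#systemctl status --no-pager ceph-mds@ceph-mgr1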

1.2 Creating the CephFS metadata and data pools

Before using CephFS, a file system must be created in the cluster, with dedicated pools specified for its metadata and its data.

# Create the cephfs-metadata pool to hold metadata
cephadmin@ceph-deploy:~$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created

# Create the cephfs-data pool to hold file data
cephadmin@ceph-deploy:~$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
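Optionally, confirm the PG settings and autoscaler state of the new pools before creating the file system (both are standard commands; output omitted):

# Review the new pools' PG counts and autoscaler status
cephadmin@ceph-deploy:~$ ceph osd pool ls detail | grep cephfs
cephadmin@ceph-deploy:~$ ceph osd pool autoscale-status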

# Ceph cluster status
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph -s
  cluster:
    id:     28820ae5-8747-4c53-827b-219361781ada
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 19h)
    mgr: ceph-mgr2(active, since 19h), standbys: ceph-mgr1
    mds: 1/1 daemons up
    osd: 20 osds: 20 up (since 17h), 20 in (since 3d)
 
  data:
    volumes: 1/1 healthy
    pools:   6 pools, 193 pgs
    objects: 99 objects, 43 MiB
    usage:   5.9 GiB used, 20 TiB / 20 TiB avail
    pgs:     193 active+clean

1.3 Creating the CephFS file system and verifying it

# Create the cephfs file system
cephadmin@ceph-deploy:~$ ceph fs new mycephfs cephfs-metadata cephfs-data
  Pool 'cephfs-data' (id '6') has pg autoscale mode 'on' but is not marked as bulk.
  Consider setting the flag by running
    # ceph osd pool set cephfs-data bulk true
new fs with metadata pool 5 and data pool 6
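The warning above can be cleared by applying the hint it prints, marking the data pool as bulk so the autoscaler sizes its PG count accordingly (optional):

# Follow the hint from the output above (optional)
cephadmin@ceph-deploy:~$ ceph osd pool set cephfs-data bulk true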

# Verify the fs
cephadmin@ceph-deploy:~$ ceph fs ls
name: mycephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]

# Check the status of the specified CephFS; cephfs-metadata is the metadata pool, cephfs-data is the data pool
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs status mycephfs
mycephfs - 0 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr1  Reqs:    0 /s    10     13     12      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata  96.0k  6483G  
  cephfs-data      data       0   6483G  
MDS version: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)

1.4 Verifying the CephFS service status

cephadmin@ceph-deploy:/data/ceph-cluster$ ceph mds stat
mycephfs:1 {0=ceph-mgr1=up:active}		# mycephfs is active

1.5 Creating a client account

# Create a regular client user named fs
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph auth get-or-create client.fs mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-data'
[client.fs]
	key = AQA7FxBlcemZNxAASwhrm5863kLIj7naAtf6RA==

# Verify the account
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph auth get client.fs
[client.fs]
	key = AQA7FxBlcemZNxAASwhrm5863kLIj7naAtf6RA==
	caps mds = "allow rw"
	caps mon = "allow r"
	caps osd = "allow rwx pool=cephfs-data"
exported keyring for client.fs

# Create an empty keyring file
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-authtool --create-keyring ceph.client.fs.keyring
creating ceph.client.fs.keyring
# Export client.fs into the keyring file
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph auth get client.fs -o ceph.client.fs.keyring 
exported keyring for client.fs

# Create a key file
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph auth print-key client.fs > fs.key

# Verify the keyring file
cephadmin@ceph-deploy:/data/ceph-cluster$ cat ceph.client.fs.keyring 
[client.fs]
	key = AQA7FxBlcemZNxAASwhrm5863kLIj7naAtf6RA==
	caps mds = "allow rw"
	caps mon = "allow r"
	caps osd = "allow rwx pool=cephfs-data"
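As an alternative to composing the caps by hand, the ceph fs authorize command generates equivalent caps for a given file system and path in one step; a sketch using a hypothetical client.fs2 so the client.fs account above is left untouched:

# Create a client restricted to the root of mycephfs with read/write access
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs authorize mycephfs client.fs2 / rw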

1.6 Installing the Ceph client

# Ubuntu 20.04 client: 10.0.0.61
[root@ceph-client1 ~]#apt update
[root@ceph-client1 ~]#apt install -y ceph-common

1.7 Syncing the client authentication files

cephadmin@ceph-deploy:/data/ceph-cluster$ scp ceph.conf ceph.client.fs.keyring fs.key root@10.0.0.61:/etc/ceph/
ceph.conf                                                                                                                                                                                                                                    100%  298   439.9KB/s   00:00  
ceph.client.fs.keyring                                                                                                                                                                                                                       100%  146   304.8KB/s   00:00  
fs.key

1.8 Verifying client permissions

[root@ceph-client1 ~]#ceph --user fs -s
  cluster:
    id:     28820ae5-8747-4c53-827b-219361781ada
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 19h)
    mgr: ceph-mgr2(active, since 19h), standbys: ceph-mgr1
    mds: 1/1 daemons up
    osd: 20 osds: 20 up (since 17h), 20 in (since 3d)
 
  data:
    volumes: 1/1 healthy
    pools:   6 pools, 193 pgs
    objects: 99 objects, 43 MiB
    usage:   5.9 GiB used, 20 TiB / 20 TiB avail
    pgs:     193 active+clean

2. Mounting CephFS with a regular user

Clients can mount CephFS in two ways: via the kernel client or via user space. Kernel-space mounting requires kernel support for the ceph module; user-space mounting requires installing ceph-fuse. The kernel client is normally recommended.

Kernel-space mounting can authenticate either with a secretfile or with an inline secret, and multiple hosts can mount the same file system concurrently.
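For reference, a user-space mount with ceph-fuse would look roughly as follows, reusing the client.fs keyring synced in section 1.7 (a sketch; the mount point /data-fuse is just an example, and the rest of this section uses the kernel client):

# Install and mount with the FUSE client (reads /etc/ceph/ceph.conf and the client.fs keyring)
[root@ceph-client1 ~]#apt install -y ceph-fuse
[root@ceph-client1 ~]#mkdir -p /data-fuse
[root@ceph-client1 ~]#ceph-fuse --id fs -m 10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789 /data-fuse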

2.1 Mounting on two hosts with a secretfile

2.1.1 Mounting on client1

[root@ceph-client1 ~]#mkdir /data
[root@ceph-client1 ~]#mount -t ceph 10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ /data -o name=fs,secretfile=/etc/ceph/fs.key
[root@ceph-client1 ~]#df -Th
Filesystem                                     Type      Size  Used Avail Use% Mounted on
udev                                           devtmpfs  429M     0  429M   0% /dev
tmpfs                                          tmpfs      95M  1.3M   94M   2% /run
/dev/sda4                                      xfs        17G  4.2G   13G  25% /
...
tmpfs                                          tmpfs      95M     0   95M   0% /run/user/0
10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ ceph      6.4T  200M  6.4T   1% /data

2.1.2 Mounting on client2

[root@ceph-client2 ~]#mkdir /data
[root@ceph-client2 ~]#mount -t ceph 10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ /data -o name=fs,secretfile=/etc/ceph/fs.key

[root@ceph-client2 ~]#df -Th
Filesystem                                     Type      Size  Used Avail Use% Mounted on
udev                                           devtmpfs  429M     0  429M   0% /dev
tmpfs                                          tmpfs      95M  1.3M   94M   2% /run
/dev/sda4                                      xfs        17G  4.8G   13G  28% /
...
tmpfs                                          tmpfs      95M     0   95M   0% /run/user/0
10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ ceph      6.4T     0  6.4T   0% /data

2.1.3 Verifying data sharing between the two hosts

Write data from client2, then view it from client1.

# Write data from client2
[root@ceph-client2 ~]#cp /var/log/syslog /data/
[root@ceph-client2 ~]#dd if=/dev/zero of=/data/test-file bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.16818 s, 180 MB/s

# client1 sees the data normally; now modify it from client1
[root@ceph-client1 ~]#ll /data/
total 205115
drwxr-xr-x  2 root root         3 Sep 24 19:32 ./
drwxr-xr-x 19 root root       275 Sep 25 01:11 ../
-rw-r--r--  1 root root      2055 Sep 24 19:32 passwd
-rw-r-----  1 root root    319825 Sep 24 19:28 syslog
-rw-r--r--  1 root root 209715200 Sep 24 19:29 test-file
# Rename a file
[root@ceph-client1 ~]#mv /data/test-file /data/file-test

# client2 sees that the file has been renamed
[root@ceph-client2 ~]#ls /data/
file-test  passwd  syslog

2.2 Mounting on two hosts with an inline secret

2.2.1 Mounting with a secret

Mount by passing the key (the content of the secret file) directly on the command line.

# View the key
[root@ceph-client2 ~]#cat /etc/ceph/fs.key
AQA7FxBlcemZNxAASwhrm5863kLIj7naAtf6RA==
# Unmount the previous secretfile-based mount
[root@ceph-client2 ~]#umount /data

# Mount using the inline secret
[root@ceph-client2 ~]#mount -t ceph 10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ /data -o name=fs,secret=AQA7FxBlcemZNxAASwhrm5863kLIj7naAtf6RA==
# Mount succeeded
[root@ceph-client2 ~]#df -Th
Filesystem                                     Type      Size  Used Avail Use% Mounted on
udev                                           devtmpfs  429M     0  429M   0% /dev
tmpfs                                          tmpfs      95M  1.4M   94M   2% /run
/dev/sda4                                      xfs        17G  4.8G   13G  28% /
...
tmpfs                                          tmpfs      95M     0   95M   0% /run/user/0
10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ ceph      6.4T  200M  6.4T   1% /data

# The same procedure works on client1; mount succeeds
[root@ceph-client1 ~]#umount /data 
[root@ceph-client1 ~]#mount -t ceph 10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ /data -o name=fs,secret=AQA7FxBlcemZNxAASwhrm5863kLIj7naAtf6RA==
[root@ceph-client1 ~]#df -Th
Filesystem                                     Type      Size  Used Avail Use% Mounted on
udev                                           devtmpfs  429M     0  429M   0% /dev
tmpfs                                          tmpfs      95M  1.3M   94M   2% /run
/dev/sda4                                      xfs        17G  4.2G   13G  25% /
...
tmpfs                                          tmpfs      95M     0   95M   0% /run/user/0
10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ ceph      6.4T  200M  6.4T   1% /data

2.2.2 Verifying the mounted data

# Write data from client2
[root@ceph-client2 ~]#cp /var/log/dmesg /data
# View the data from client1
[root@ceph-client1 ~]#ls /data
dmesg  file-test  passwd  syslog

# Check the mount status
[root@ceph-client1 ~]#stat -f /data
  File: "/data"
    ID: 762bf9aa89563fe6 Namelen: 255     Type: ceph
Block size: 4194304    Fundamental block size: 4194304
Blocks: Total: 1659685    Free: 1659635    Available: 1659635
Inodes: Total: 52         Free: -1

[root@ceph-client2 ~]#stat -f /data
  File: "/data"
    ID: 762bf9aa89563fe6 Namelen: 255     Type: ceph
Block size: 4194304    Fundamental block size: 4194304
Blocks: Total: 1659685    Free: 1659635    Available: 1659635
Inodes: Total: 52         Free: -1

2.3 Configuring automatic mounting at boot

[root@ceph-client2 ~]#cat /etc/fstab 
...
# Add this line (the third field is the filesystem type, ceph)
10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ /data ceph defaults,name=fs,secretfile=/etc/ceph/fs.key,_netdev 0 0

# Apply with mount -a, or reboot
[root@ceph-client2 ~]#mount -a		# reboot

# Confirm the ceph file system is mounted
[root@ceph-client2 ~]#df -Th
Filesystem                                     Type      Size  Used Avail Use% Mounted on
udev                                           devtmpfs  429M     0  429M   0% /dev
tmpfs                                          tmpfs      95M  1.4M   94M   2% /run
/dev/sda4                                      xfs        17G  4.6G   13G  27% /
tmpfs                                          tmpfs     473M     0  473M   0% /dev/shm
tmpfs                                          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                                          tmpfs     473M     0  473M   0% /sys/fs/cgroup
...
10.0.0.51:6789,10.0.0.52:6789,10.0.0.53:6789:/ ceph      6.4T  200M  6.4T   1% /data

2.4 Client kernel module

The client kernel loads the ceph.ko module in order to mount the cephfs file system; check it with:

lsmod | grep ceph
modinfo ceph
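If the module is not listed, it can be loaded manually (the kernel normally loads it automatically the first time mount -t ceph is run):

[root@ceph-client1 ~]#modprobe ceph
[root@ceph-client1 ~]#lsmod | grep ceph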

3. Implementing an MDS high-availability architecture with multiple active and standby daemons

https://docs.ceph.com/en/latest/cephfs/add-remove-mds/

As the access entry point for CephFS, the MDS needs both high performance and redundancy. MDS supports multi-daemon deployments, including architectures similar to a Redis Cluster style multi-active setup with standbys, to achieve both high performance and high availability. For example, with 4 MDS daemons started and max_mds set to 2, two MDS daemons become active and the other two act as standbys.

You can also dedicate a standby MDS to each active daemon, so that if an active MDS fails, its standby takes over immediately and continues serving metadata reads and writes. The common options for configuring standby MDS daemons are listed below.

  • mds_standby_replay

    Set to true or false.

    true enables standby-replay mode: the standby continuously replays the active MDS's journal, so if the active MDS fails the standby can take over quickly.

    If false, the standby only starts catching up after the active MDS fails, which causes a longer interruption.

  • mds_standby_for_name

    Makes this MDS act as a standby only for the MDS daemon with the specified name.

  • mds_standby_for_rank

    Makes this MDS act as a standby only for the specified rank (usually the rank number). When multiple CephFS file systems exist, mds_standby_for_fscid can additionally be used to target a specific file system.

  • mds_standby_for_fscid

    Specifies a CephFS file system ID. Combined with mds_standby_for_rank it applies to that rank of the given file system; without mds_standby_for_rank it applies to all ranks of that file system.

3.1 Adding MDS servers

Check the current state of the MDS servers:

cephadmin@ceph-deploy:/data/ceph-cluster$ ceph mds stat
mycephfs:1 {0=ceph-mgr1=up:active}

Add ceph-mgr2, ceph-mon2, and ceph-mon3 to the Ceph cluster as MDS servers, to build a two-active / two-standby MDS architecture for high availability and high performance.

# Install the ceph-mds service
[root@ceph-mgr2 ~]#apt install ceph-mds -y
[root@ceph-mon2 ~]#apt install ceph-mds -y
[root@ceph-mon3 ~]#apt install ceph-mds -y

# Add the MDS servers to the cluster
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-deploy mds create ceph-mgr2
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-deploy mds create ceph-mon2
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-deploy mds create ceph-mon3

Verify the current state of the MDS servers:

cephadmin@ceph-deploy:/data/ceph-cluster$ ceph mds stat
mycephfs:1 {0=ceph-mgr1=up:active} 3 up:standby		# now 1 active, 3 standby

3.2 Verifying the current cluster state

Currently one MDS server is active and three are in standby:

cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr1  Reqs:    0 /s    15     17     12      6   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   324k  6482G  
  cephfs-data      data     601M  6482G  
STANDBY MDS  	# standby daemons
 ceph-mon3   	
 ceph-mon2   
 ceph-mgr2   
MDS version: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)

3.3 Current file system state

cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name	mycephfs
epoch	5
flags	12
created	2023-09-24T18:50:32.132506+0800
modified	2023-09-24T18:55:59.029243+0800
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=63826}
failed
damaged
stopped
data_pools	[6]
metadata_pool	5
inline_data	disabled
balancer
standby_count_wanted	1
[mds.ceph-mgr1{0:63826} state up:active seq 2 addr [v2:10.0.0.54:6800/1190806418,v1:10.0.0.54:6801/1190806418] compat {c=[1],r=[1],i=[7ff]}]

3.4 Setting the number of active MDS daemons

There are now four MDS servers running as one active and three standby; the deployment can be improved to two active and two standby.

# Set the maximum number of simultaneously active MDS daemons to 2
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs set mycephfs max_mds 2

# Verify: ceph-mgr1 and ceph-mon2 are active, ceph-mgr2 and ceph-mon3 are standby
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs status
mycephfs - 2 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr1  Reqs:    0 /s    15     17     12      2   
 1    active  ceph-mon2  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   408k  6482G  
  cephfs-data      data     601M  6482G  
STANDBY MDS  
 ceph-mon3   
 ceph-mgr2   
MDS version: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)
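Scaling back down works the same way in recent releases: lowering max_mds should cause the surplus rank to stop automatically and its daemon to return to standby (shown here only as a reference):

# Scale back to a single active MDS if needed
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs set mycephfs max_mds 1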

3.5 Optimizing MDS high availability

At this point ceph-mgr1 and ceph-mon2 are active while ceph-mgr2 and ceph-mon3 are standby. We can now make ceph-mgr2 the standby for ceph-mgr1 and ceph-mon3 the standby for ceph-mon2, so that each active MDS has a fixed standby.

Making all four MDS daemons active would increase file system read/write throughput, but then any single node failure would trigger rank reassignment and introduce read latency.
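Separately from the per-daemon standby options, the file system itself records how many standbys it expects (the standby_count_wanted field shown in the ceph fs get output above); raising it makes the cluster report a health warning whenever fewer standbys are available, for example:

# Warn if fewer than 2 standby MDS daemons are available
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs set mycephfs standby_count_wanted 2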

# Edit the configuration file
cephadmin@ceph-deploy:/data/ceph-cluster$ cat ceph.conf 
[global]
fsid = 28820ae5-8747-4c53-827b-219361781ada
public_network = 10.0.0.0/24
cluster_network = 192.168.10.0/24
mon_initial_members = ceph-mon1,ceph-mon2,ceph-mon3
mon_host = 10.0.0.51,10.0.0.52,10.0.0.53
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# Add the following sections
[mds.ceph-mgr2]
mds_standby_for_name = ceph-mgr1
mds_standby_replay = true

[mds.ceph-mgr1]
mds_standby_for_name = ceph-mgr2
mds_standby_replay = true

[mds.ceph-mon3]
mds_standby_for_name = ceph-mon2
mds_standby_replay = true

[mds.ceph-mon2]
mds_standby_for_name = ceph-mon3
mds_standby_replay = true

3.6 Distributing the config file and restarting the MDS services

# Push the config file to each MDS server; restart the services for it to take effect
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon2
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon3
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr1
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr2

# Reload systemd and restart the mds services: restart the standby nodes first, then the active nodes; restarting the active nodes triggers a role switch
[root@ceph-mgr2 ~]#systemctl daemon-reload 
[root@ceph-mgr2 ~]#systemctl restart ceph-mds@ceph-mgr2.service

[root@ceph-mon3 ~]#systemctl daemon-reload 
[root@ceph-mon3 ~]#systemctl restart ceph-mds@ceph-mon3.service

[root@ceph-mgr1 ~]#systemctl daemon-reload 
[root@ceph-mgr1 ~]#systemctl restart ceph-mds@ceph-mgr1.service

[root@ceph-mon2 ~]#systemctl daemon-reload 
[root@ceph-mon2 ~]#systemctl restart ceph-mds@ceph-mon2.service

3.7 MDS high-availability state of the Ceph cluster

cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs status
mycephfs - 0 clients
========
RANK  STATE      MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mon3  Reqs:    0 /s    15     17     12      0   
 1    active  ceph-mgr2  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   408k  6482G  
  cephfs-data      data     601M  6482G  
STANDBY MDS  
 ceph-mgr1   
 ceph-mon2   
MDS version: ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable)


cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs get mycephfs
Filesystem 'mycephfs' (1)
fs_name	mycephfs
epoch	150
flags	12
created	2023-09-24T18:50:32.132506+0800
modified	2023-09-25T03:53:39.542747+0800
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
required_client_features	{}
last_failure	0
last_failure_osd_epoch	349
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=sna2}
max_mds	2
in	0,1
up	{0=64919,1=64933}
failed
damaged
stopped
data_pools	[6]
metadata_pool	5
inline_data	disabled
balancer
standby_count_wanted	1
[mds.ceph-mon3{0:64919} state up:active seq 21 addr [v2:10.0.0.53:6800/3743360414,v1:10.0.0.53:6801/3743360414] compat {c=[1],r=[1],i=[7ff]}]
[mds.ceph-mgr2{1:64933} state up:active seq 10 addr [v2:10.0.0.55:6802/2745364032,v1:10.0.0.55:6803/2745364032] compat {c=[1],r=[1],i=[7ff]}]

3.8 MDS failover process

Failure --> replay (replay the MDS journal) --> resolve (resolve state with the other ranks) --> reconnect (clients reconnect) --> rejoin (rejoin the metadata cache) --> active (failover complete)
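A simple way to watch this sequence is to stop the daemon on one of the currently active MDS nodes and follow the cluster while the standby takes over (a sketch; ceph-mon3 holds rank 0 in the status shown above):

# Stop the active MDS on ceph-mon3, watch the standby take over, then start it again
[root@ceph-mon3 ~]#systemctl stop ceph-mds@ceph-mon3.service
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph -w
cephadmin@ceph-deploy:/data/ceph-cluster$ ceph fs status
[root@ceph-mon3 ~]#systemctl start ceph-mds@ceph-mon3.service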

[root@ceph-mgr1 ~]#tail -100 /var/log/ceph/ceph-mds.ceph-mgr1.log 
2023-09-25T03:40:11.157+0800 7fec41d01780  0 set uid:gid to 64045:64045 (ceph:ceph)
2023-09-25T03:40:11.157+0800 7fec41d01780  0 ceph version 16.2.14 (238ba602515df21ea7ffc75c88db29f9e5ef12c9) pacific (stable), process ceph-mds, pid 46097
2023-09-25T03:40:11.157+0800 7fec41d01780  1 main not setting numa affinity
2023-09-25T03:40:11.157+0800 7fec41d01780  0 pidfile_write: ignore empty --pid-file
2023-09-25T03:40:11.165+0800 7fec3d49e700  1 mds.ceph-mgr1 Updating MDS map to version 126 from mon.1
2023-09-25T03:40:11.221+0800 7fec3d49e700  1 mds.ceph-mgr1 Updating MDS map to version 127 from mon.1
2023-09-25T03:40:11.221+0800 7fec3d49e700  1 mds.ceph-mgr1 Monitors have assigned me to become a standby.
2023-09-25T03:40:21.665+0800 7fec3d49e700  1 mds.ceph-mgr1 Updating MDS map to version 129 from mon.1
2023-09-25T03:40:21.669+0800 7fec3d49e700  1 mds.1.129 handle_mds_map i am now mds.1.129
2023-09-25T03:40:21.669+0800 7fec3d49e700  1 mds.1.129 handle_mds_map state change up:standby --> up:replay			# replaying the MDS journal
2023-09-25T03:40:21.669+0800 7fec3d49e700  1 mds.1.129 replay_start
2023-09-25T03:40:21.669+0800 7fec3d49e700  1 mds.1.129  waiting for osdmap 344 (which blocklists prior instance)
2023-09-25T03:40:21.681+0800 7fec36c91700  0 mds.1.cache creating system inode with ino:0x101
2023-09-25T03:40:21.681+0800 7fec36c91700  0 mds.1.cache creating system inode with ino:0x1
2023-09-25T03:40:21.681+0800 7fec35c8f700  1 mds.1.129 Finished replaying journal
2023-09-25T03:40:21.681+0800 7fec35c8f700  1 mds.1.129 making mds journal writeable
2023-09-25T03:40:22.669+0800 7fec3d49e700  1 mds.ceph-mgr1 Updating MDS map to version 130 from mon.1
2023-09-25T03:40:22.669+0800 7fec3d49e700  1 mds.1.129 handle_mds_map i am now mds.1.129
2023-09-25T03:40:22.669+0800 7fec3d49e700  1 mds.1.129 handle_mds_map state change up:replay --> up:resolve			# resolving state with peer ranks
2023-09-25T03:40:22.669+0800 7fec3d49e700  1 mds.1.129 resolve_start
2023-09-25T03:40:22.669+0800 7fec3d49e700  1 mds.1.129 reopen_log
2023-09-25T03:40:22.669+0800 7fec3d49e700  1 mds.1.129  recovery set is 0
2023-09-25T03:40:22.669+0800 7fec3d49e700  1 mds.1.129  recovery set is 0
2023-09-25T03:40:22.673+0800 7fec3d49e700  1 mds.ceph-mgr1 parse_caps: cannot decode auth caps buffer of length 0
2023-09-25T03:40:22.673+0800 7fec3d49e700  1 mds.1.129 resolve_done
2023-09-25T03:40:23.673+0800 7fec3d49e700  1 mds.ceph-mgr1 Updating MDS map to version 131 from mon.1
2023-09-25T03:40:23.673+0800 7fec3d49e700  1 mds.1.129 handle_mds_map i am now mds.1.129
2023-09-25T03:40:23.673+0800 7fec3d49e700  1 mds.1.129 handle_mds_map state change up:resolve --> up:reconnect		# waiting for clients to reconnect
2023-09-25T03:40:23.673+0800 7fec3d49e700  1 mds.1.129 reconnect_start
2023-09-25T03:40:23.673+0800 7fec3d49e700  1 mds.1.129 reconnect_done
2023-09-25T03:40:24.677+0800 7fec3d49e700  1 mds.ceph-mgr1 Updating MDS map to version 132 from mon.1
2023-09-25T03:40:24.677+0800 7fec3d49e700  1 mds.1.129 handle_mds_map i am now mds.1.129
2023-09-25T03:40:24.677+0800 7fec3d49e700  1 mds.1.129 handle_mds_map state change up:reconnect --> up:rejoin		# rejoining the metadata cache
2023-09-25T03:40:24.677+0800 7fec3d49e700  1 mds.1.129 rejoin_start
2023-09-25T03:40:24.677+0800 7fec3d49e700  1 mds.1.129 rejoin_joint_start
2023-09-25T03:40:24.681+0800 7fec3d49e700  1 mds.1.129 rejoin_done
2023-09-25T03:40:25.682+0800 7fec3d49e700  1 mds.ceph-mgr1 Updating MDS map to version 133 from mon.1
2023-09-25T03:40:25.682+0800 7fec3d49e700  1 mds.1.129 handle_mds_map i am now mds.1.129
2023-09-25T03:40:25.682+0800 7fec3d49e700  1 mds.1.129 handle_mds_map state change up:rejoin --> up:active			# failover complete, now active
2023-09-25T03:40:25.682+0800 7fec3d49e700  1 mds.1.129 recovery_done -- successful recovery!
2023-09-25T03:40:25.682+0800 7fec3d49e700  1 mds.1.129 active_start
2023-09-25T03:40:25.682+0800 7fec3d49e700  1 mds.1.129 cluster recovered.
2023-09-25T03:53:09.090+0800 7fec3eca1700 -1 received  signal: Terminated from /sbin/init maybe-ubiquity  (PID: 1) UID: 0
2023-09-25T03:53:09.090+0800 7fec3eca1700 -1 mds.ceph-mgr1 *** got signal Terminated ***
2023-09-25T03:53:09.090+0800 7fec3eca1700  1 mds.ceph-mgr1 suicide! Wanted state up:active
2023-09-25T03:53:12.826+0800 7fec3eca1700  1 mds.1.129 shutdown: shutting down rank 1
2023-09-25T03:53:12.826+0800 7fec3d49e700  0 ms_deliver_dispatch: unhandled message 0x55f2498bc1c0 osd_map(348..348 src has 1..348) v4 from mon.1 v2:10.0.0.52:3300/0
2023-09-25T03:53:12.826+0800 7fec3d49e700  0 ms_deliver_dispatch: unhandled message 0x55f24a61b6c0 mdsmap(e 138) v2 from mon.1 v2:10.0.0.52:3300/0
2023-09-25T03:53:12.826+0800 7fec3d49e700  0 ms_deliver_dispatch: unhandled message 0x55f24a61a1a0 mdsmap(e 4294967295) v2 from mon.1 v2:10.0.0.52:3300/0
2023-09-25T03:53:12.826+0800 7fec3d49e700  0 ms_deliver_dispatch: unhandled message 0x55f24a5a3d40 mdsmap(e 139) v2 from mon.1 v2:10.0.0.52:3300/0
2023-09-25T03:53:12.826+0800 7fec3d49e700  0 ms_deliver_dispatch: unhandled message 0x55f24a5ce000 mdsmap(e 140) v2 from mon.1 v2:10.0.0.52:3300/0

