
Ceph distributed storage installation (ceph-deploy)


Ceph learning notes

Ceph overview and features

Ceph is a distributed storage system: it splits each piece of data it manages into one or more fixed-size objects and uses those objects as the atomic unit for storing and retrieving data.

The underlying storage service for these objects is provided by a cluster of multiple hosts, known as the RADOS cluster (Reliable Autonomic Distributed Object Store).
Through its internal CRUSH algorithm, Ceph computes in real time where a file's objects should be stored, which is how it locates objects quickly.
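As an illustration, once a cluster is up you can ask CRUSH where a given object would be placed (the pool and object names here are placeholders, not part of this deployment):

ceph osd map mypool myobject     # prints the placement group and the OSDs it maps to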

librados is the API of the RADOS cluster; it has bindings for programming languages such as C, C++, Java, Python, Ruby, and PHP.

Powerful: Ceph provides object storage, block storage, and file storage within a single unified storage architecture.

Scalable: addressing through the CRUSH algorithm gives Ceph strong scalability.

Highly available: the number of data replicas is defined by the administrator, and the CRUSH algorithm places replicas in separate failure domains. Combined with strong data consistency, this gives Ceph high reliability: it can tolerate many failure scenarios and automatically attempts to repair itself.

LIBRADOS   -- store data programmatically against the cluster through the native library
RADOSGW    -- a cloud storage service exposed through a standard RESTful interface
RBD        -- presents Ceph capacity as individual block devices; the RBD interface is ready on the server side once the Ceph environment is deployed
CephFS     -- stores data through a standard file-system interface
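As an illustration, once a cluster and a pool exist each of these interfaces can be exercised from the command line (the pool and image names below are placeholders, not part of this deployment):

rados -p mypool put obj1 /etc/hosts    # store a raw object via the librados-based rados tool
rbd create mypool/disk1 --size 1024    # create a 1 GiB RBD block image
ceph fs ls                             # list CephFS file systems (requires an MDS)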
Ceph components
Component   Description
Monitors:   The Ceph Monitor daemon (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, OSD map, MDS map, and CRUSH map. Monitors also handle authentication between daemons and clients. At least three monitors are normally needed for redundancy and high availability; they synchronize state between nodes with the Paxos protocol.
Managers:   The Ceph Manager daemon (ceph-mgr) tracks runtime metrics and the current state of the cluster, including storage utilization, performance metrics, and system load. At least two managers are normally needed for high availability; they synchronize state between nodes with the Raft protocol.
Ceph OSDs:  The object storage daemon (ceph-osd) stores data and handles replication, recovery, and rebalancing, and it supplies monitoring information to the monitors and managers by checking the heartbeats of other OSD daemons. At least three OSDs are normally needed for redundancy and high availability; in essence each OSD corresponds to a storage disk on a host.
MDSs:       The Ceph Metadata Server stores metadata on behalf of the Ceph file system and lets POSIX file-system clients run basic commands (ls, find, and so on) without putting a heavy load on the cluster.
Ceph network model

A Ceph production environment is typically split across two networks (a minimal ceph.conf sketch follows this list):

Public network: carries client data traffic

Cluster network: carries the cluster's internal management and replication traffic
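These correspond to the public_network and cluster_network options in ceph.conf; with the subnets used later in this article it would look like:

[global]
public_network  = 192.168.160.0/24
cluster_network = 10.0.0.0/24

ceph-deploy fills these in automatically when they are passed via --public-network and --cluster-network, as shown in the initialization step below.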

Ceph versions

x.0.z - development releases

x.1.z - release candidates

x.2.z - stable, bug-fix releases
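For example, the packages installed later in this article are 16.2.10, i.e. a stable x.2.z build of the Pacific (16.x) series. Once the client packages are on a host, the running release can be checked with:

ceph --version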

Viewing release information

https://docs.ceph.com/en/latest/releases/
https://docs.ceph.org.cn/start/intro/
Ceph deployment
Method        Notes
cephadm:      Installs and manages a Ceph cluster using containers and systemd, tightly integrated with the CLI and the dashboard GUI. Supports only Octopus and newer releases; currently the officially recommended method.
ceph-deploy:  A Python-based tool for quickly deploying a cluster. No longer supported or tested since the Nautilus release, so it is only recommended for releases older than Nautilus. Does not support RHEL 8, CentOS 8, or newer operating systems.
rook:         Runs a Ceph cluster on Kubernetes and also manages storage resources and provisioning through the Kubernetes APIs. Supports only Nautilus and newer releases; does not support RHEL 8, CentOS 8, or newer operating systems.
ceph-ansible: Deploys and manages a Ceph cluster with Ansible. Widely used, but not integrated with the orchestrator APIs introduced in Nautilus and Octopus, so the newer management features and dashboard integration are unavailable.
ceph-salt:    Installs Ceph using Salt and cephadm.
Charms:       Installs Ceph using Juju (a model-driven operator lifecycle manager).
puppet-ceph:  Installs Ceph via Puppet.
Binary:       Manual installation.
Windows GUI:  Deployment driven from a graphical installer on Windows hosts.
Environment preparation
# Environment plan
OS: Ubuntu 18.04

Download link
https://releases.ubuntu.com/18.04/ubuntu-18.04.6-desktop-amd64.iso

Public network:  192.168.160.0/24
Cluster network: 10.0.0.0/24

192.168.160.128
192.168.160.129
192.168.160.130
192.168.160.131

10.0.0.128
10.0.0.129
10.0.0.130
10.0.0.131


# Import the Ceph release key
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
# Add the apt repository
echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
# Refresh the package index
sudo apt update
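Optionally confirm that the new repository is visible before installing anything:

apt-cache policy ceph-deploy ceph-common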


#用脚本实现批量创建用户
cat > create_cephadm.sh <<EOF
#!/bin/bash
# 设定普通用户
useradd -m -s /bin/bash cephadm
echo cephadm:123456 | chpasswd
echo "cephadm ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/cephadm
chmod 0440 /etc/sudoers.d/cephadm
EOF
# Run it on every node
for i in {128..131}; do ssh root@192.168.160.$i bash < create_cephadm.sh ; done
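A quick check that the user exists on every node (reusing the same loop; a sketch):

for i in {128..131}; do ssh root@192.168.160.$i id cephadm; done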

# Host name resolution (/etc/hosts)
Add the following entries on every node
192.168.160.128 node
192.168.160.129 node-1
192.168.160.130 node-2
192.168.160.131 mon
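One way to push the same entries to every node, reusing the root SSH access from the previous step (a sketch; ceph-hosts.txt is a hypothetical file holding just the four lines above):

for i in {129..131}; do ssh root@192.168.160.$i 'cat >> /etc/hosts' < ceph-hosts.txt; done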


# Passwordless SSH
Required on every node
root@node:~# su - cephadm
cephadm@node:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephadm/.ssh/id_rsa):
Created directory '/home/cephadm/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephadm/.ssh/id_rsa.
Your public key has been saved in /home/cephadm/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:uTyNJcrHhxWjSsqbwSrgernfbnxpZbqs9WTNkjoc5As cephadm@node
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|          o      |
|        .o o     |
|      .oS o      |
|.  o +E=oOo+     |
|o  .=.+oXBB o    |
|..o. =o+OB .     |
|oooo+o+++o.      |
+----[SHA256]-----+
cephadm@node:~$ ssh-copy-id cephadm@192.168.160.128
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cephadm/.ssh/id_rsa.pub"
The authenticity of host '192.168.160.128 (192.168.160.128)' can't be established.
ECDSA key fingerprint is SHA256:XhYuJgB5QONz1yKl8gPQ9qwji3Mlcj+j8LU4D2/LFd4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cephadm@192.168.160.128's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cephadm@192.168.160.128'"
and check to make sure that only the key(s) you wanted were added.
cephadm@node:~$ ssh-copy-id cephadm@192.168.160.129
cephadm@node:~$ ssh-copy-id cephadm@192.168.160.130
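Passwordless login can then be verified against every host name defined in /etc/hosts (a quick sanity check):

for h in node node-1 node-2 mon; do ssh cephadm@$h hostname; done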
Installing the ceph-deploy tool
Install the deploy tool
root@mon:~# apt install  ceph-deploy
Prepare and initialize the mon node
root@mon:~# su - cephadm
cephadm@mon:~$ mkdir ceph-cluster/ # holds the cluster bootstrap configuration
cephadm@mon:~$ cd ceph-cluster/
cephadm@mon:~/ceph-cluster$

Usage
ceph-deploy --help
ceph-deploy mon --help
new: start deploying a new Ceph cluster; generates the cluster configuration file and the keyring files
install: install Ceph packages on remote hosts; --release selects the version to install
rgw: manage RGW daemons (radosgw, the object storage gateway)
mgr: manage mgr daemons (ceph-mgr, the Ceph Manager daemon)
mds: manage mds daemons (the Ceph Metadata Server)
mon: manage mon daemons (ceph-mon, the Ceph Monitor)
gatherkeys: fetch the authentication keys used for provisioning new nodes; they are needed when new mon/osd/mds nodes join
disk: manage disks on remote hosts
osd: prepare a data disk on a remote host, i.e. add the specified disk of a remote host to the cluster as an OSD
repo: manage repositories on remote hosts
admin: push the cluster configuration file and the client.admin keyring to remote hosts
config: push ceph.conf to remote hosts, or copy it back from them
uninstall: remove installed packages from remote hosts
purgedata: delete Ceph data from /var/lib/ceph and remove the contents of /etc/ceph
forgetkeys: delete all authentication keyrings from the local host, including the client.admin, monitor, and bootstrap keyrings
pkg: manage packages on remote hosts
calamari: install and configure a Calamari web node (Calamari is a web-based monitoring node)

Install Python 2 on every node
apt install python2.7 -y
Create a python2 symlink on every node
root@mon:~# ln -sv /usr/bin/python2.7 /usr/bin/python2
'/usr/bin/python2' -> '/usr/bin/python2.7'

Initialize the cluster
cephadm@mon:~$ mkdir cluster && cd cluster
cephadm@mon:~/cluster$
cephadm@mon:~/cluster$ ls
cephadm@mon:~/cluster$
cephadm@mon:~/cluster$ ceph-deploy new --public-network 192.168.160.0/24 --cluster-network 10.0.0.0/24 mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy new --public-network 192.168.160.0/24 --cluster-network 10.0.0.0/24 mon
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa94bfb6dc0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['mon']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fa949522b50>
[ceph_deploy.cli][INFO  ]  public_network                : 192.168.160.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 10.0.0.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[mon][DEBUG ] connection detected need for sudo
[mon][DEBUG ] connected to host: mon
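The new subcommand writes ceph.conf and ceph.mon.keyring into the working directory. The exact contents depend on the environment; for this cluster the generated ceph.conf would look roughly like this (the fsid matches the cluster id shown later, the remaining lines are the usual ceph-deploy output):

cephadm@mon:~/cluster$ cat ceph.conf
[global]
fsid = 87dcbebf-73ba-4d30-b620-9653b2446c76
mon_initial_members = mon
mon_host = 192.168.160.131
public_network = 192.168.160.0/24
cluster_network = 10.0.0.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx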
# Initialize the mon node
cephadm@mon:~/cluster$ ceph-deploy --overwrite-conf mon create-initial
[mon][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.mon.asok mon_status
[mon][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon/keyring auth get client.admin
[mon][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon/keyring auth get client.bootstrap-mds
[mon][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon/keyring auth get client.bootstrap-mgr
[mon][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon/keyring auth get client.bootstrap-osd
[mon][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpi_co7T
cephadm@mon:~/cluster$
cephadm@mon:~/cluster$ ls -l
total 48
-rw------- 1 cephadm cephadm   113 Dec 21 10:47 ceph.bootstrap-mds.keyring
-rw------- 1 cephadm cephadm   113 Dec 21 10:47 ceph.bootstrap-mgr.keyring
-rw------- 1 cephadm cephadm   113 Dec 21 10:47 ceph.bootstrap-osd.keyring
-rw------- 1 cephadm cephadm   113 Dec 21 10:47 ceph.bootstrap-rgw.keyring
-rw------- 1 cephadm cephadm   151 Dec 21 10:47 ceph.client.admin.keyring
-rw-rw-r-- 1 cephadm cephadm   260 Dec 21 10:26 ceph.conf
-rw-rw-r-- 1 cephadm cephadm 16495 Dec 21 10:47 ceph-deploy-ceph.log
-rw------- 1 cephadm cephadm    73 Dec 21 10:26 ceph.mon.keyring
cephadm@mon:~/cluster$
# Push the admin keyring and ceph.conf to /etc/ceph/
cephadm@mon:~/cluster$ ceph-deploy admin mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy admin mon
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4366a0b050>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['mon']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f4367384ad0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to mon
[mon][DEBUG ] connection detected need for sudo
[mon][DEBUG ] connected to host: mon
[mon][DEBUG ] detect platform information from remote host
[mon][DEBUG ] detect machine type
[mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

# Check the cluster: the command cannot find the keyring because the cephadm user lacks read permission on it
cephadm@mon:~/cluster$ ceph -s
2022-12-21T10:59:35.378+0800 7f87cab6f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-12-21T10:59:35.378+0800 7f87cab6f700 -1 AuthRegistry(0x7f87c405bc38) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2022-12-21T10:59:35.382+0800 7f87cab6f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-12-21T10:59:35.382+0800 7f87cab6f700 -1 AuthRegistry(0x7f87bc004a50) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2022-12-21T10:59:35.382+0800 7f87cab6f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-12-21T10:59:35.382+0800 7f87cab6f700 -1 AuthRegistry(0x7f87cab6dff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[errno 2] RADOS object not found (error connecting to the cluster)
# Install the acl package
cephadm@mon:~/cluster$ sudo apt install  acl -y
# Give the cephadm user read access to the admin keyring
cephadm@mon:~/cluster$ sudo setfacl -m u:cephadm:r /etc/ceph/ceph.client.admin.keyring
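The resulting ACL can be verified before retrying (optional check):

cephadm@mon:~/cluster$ getfacl /etc/ceph/ceph.client.admin.keyring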
# Check the cluster again
cephadm@mon:~/cluster$ ceph -s
  cluster:
    id:     87dcbebf-73ba-4d30-b620-9653b2446c76
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum mon (age 14m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
# Silence the insecure global_id reclaim warning
cephadm@mon:~/cluster$ ceph config set mon auth_allow_insecure_global_id_reclaim false

cephadm@mon:~/cluster$ ceph -s
  cluster:
    id:     87dcbebf-73ba-4d30-b620-9653b2446c76
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum mon (age 16m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

cephadm@mon:~/cluster$

Add the mgr node

# Install the mgr packages on node
cephadm@mon:~/cluster$ ceph-deploy install --mgr node
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy install --mgr node
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff306795910>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7ff3070c5a50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : True
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['node']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts node
[ceph_deploy.install][DEBUG ] Detecting platform for host node ...
The authenticity of host 'node (192.168.160.128)' can't be established.
ECDSA key fingerprint is SHA256:XhYuJgB5QONz1yKl8gPQ9qwji3Mlcj+j8LU4D2/LFd4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node' (ECDSA) to the list of known hosts.
[node][DEBUG ] connection detected need for sudo
[node][DEBUG ] connected to host: node
[node][DEBUG ] detect platform information from remote host
[node][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[node][INFO  ] installing Ceph on node
[node][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[node][DEBUG ] Reading package lists...
[node][DEBUG ] Building dependency tree...
[node][DEBUG ] Reading state information...
# Check the ceph packages installed on the mgr node
root@node:~# dpkg -l |grep ceph
ii  ceph-base                                  16.2.10-1bionic                                 amd64        common ceph daemon libraries and management tools
ii  ceph-common                                16.2.10-1bionic                                 amd64        common utilities to mount and interact with a ceph storage cluster
ii  ceph-mgr                                   16.2.10-1bionic                                 amd64        manager for the ceph distributed storage system
ii  ceph-mgr-modules-core                      16.2.10-1bionic                                 all          ceph manager modules which are always enabled
ii  libcephfs2                                 16.2.10-1bionic                                 amd64        Ceph distributed file system client library
ii  python3-ceph-argparse                      16.2.10-1bionic                                 all          Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common                        16.2.10-1bionic                                 all          Python 3 utility libraries for Ceph
ii  python3-cephfs                             16.2.10-1bionic                                 amd64        Python 3 libraries for the Ceph libcephfs library
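The capture above only shows the package installation; the active manager in the cluster status below implies the mgr daemon was also created on node. That step is not shown in the original capture, but with ceph-deploy it is done the same way as for node-2 at the end of this article:

cephadm@mon:~/cluster$ ceph-deploy mgr create node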
# Check the cluster
root@mon:~# ceph -s
  cluster:
    id:     87dcbebf-73ba-4d30-b620-9653b2446c76
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum mon (age 35s)
    mgr: node(active, since 15s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Add the OSD nodes

# Install the OSD packages (Pacific release) on node-1 and node-2
cephadm@mon:~/cluster$ ceph-deploy install --release pacific --osd node-1 node-2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy install --release pacific --osd node-1 node-2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1151dac910>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7f11526dca50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['node-1', 'node-2']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : True
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : pacific
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version pacific on cluster ceph hosts node-1 node-2
[ceph_deploy.install][DEBUG ] Detecting platform for host node
# List the disks on the OSD nodes
cephadm@mon:~/cluster$ ceph-deploy disk list node-1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list node-1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc56c9fa280>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['node-1']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fc56c9cf350>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[node-1][DEBUG ] connection detected need for sudo
[node-1][DEBUG ] connected to host: node-1
[node-1][DEBUG ] detect platform information from remote host
[node-1][DEBUG ] detect machine type
[node-1][DEBUG ] find the location of an executable
[node-1][INFO  ] Running command: sudo fdisk -l
[node-1][INFO  ] Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
[node-1][INFO  ] Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
[node-1][INFO  ] Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
cephadm@mon:~/cluster$ ceph-deploy disk list node-2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list node-2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f79b9041280>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['node-2']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f79b9016350>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[node-2][DEBUG ] connection detected need for sudo
[node-2][DEBUG ] connected to host: node-2
[node-2][DEBUG ] detect platform information from remote host
[node-2][DEBUG ] detect machine type
[node-2][DEBUG ] find the location of an executable
[node-2][INFO  ] Running command: sudo fdisk -l
[node-2][INFO  ] Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
[node-2][INFO  ] Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
[node-2][INFO  ] Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
cephadm@mon:~/cluster$
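If a target disk already carries partitions or old data, ceph-volume will refuse to use it; in that case it can be wiped first (only needed for non-empty disks):

cephadm@mon:~/cluster$ ceph-deploy disk zap node-1 /dev/sdb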
# Add the first OSD disk
cephadm@mon:~/cluster$ ceph-deploy --overwrite-conf osd create node-1 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf osd create node-1 --data /dev/sdb
[node-1][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[node-1][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[node-1][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[node-1][INFO  ] checking OSD status...
[node-1][DEBUG ] find the location of an executable
[node-1][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node-1 is now ready for osd use.
# Check the OSDs
cephadm@mon:~/cluster$ ceph osd status
ID  HOST     USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  node-1  4896k  19.9G      0        0       0        0   exists,up
cephadm@mon:~/cluster$ ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE    RAW USE  DATA    OMAP  META     AVAIL   %USE  VAR   PGS  STATUS
 0    hdd  0.01949   1.00000  20 GiB  4.8 MiB  88 KiB   0 B  4.7 MiB  20 GiB  0.02  1.00    0      up
                       TOTAL  20 GiB  4.8 MiB  88 KiB   0 B  4.7 MiB  20 GiB  0.02
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
# Add the remaining disks
cephadm@mon:~/cluster$ ceph-deploy --overwrite-conf osd create node-1 --data /dev/sdc
cephadm@mon:~/cluster$ ceph-deploy --overwrite-conf osd create node-2 --data /dev/sdb
cephadm@mon:~/cluster$ ceph-deploy --overwrite-conf osd create node-2 --data /dev/sdc
cephadm@mon:~/cluster$ ceph -s
  cluster:
    id:     87dcbebf-73ba-4d30-b620-9653b2446c76
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum mon (age 2h)
    mgr: node(active, since 2h)
    osd: 4 osds: 4 up (since 23s), 4 in (since 30s); 1 remapped pgs

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   21 MiB used, 80 GiB / 80 GiB avail
    pgs:     1 active+clean
cephadm@mon:~/cluster$ ceph osd ls
0
1
2
3
cephadm@mon:~/cluster$
root@mon:~# ceph osd status
ID  HOST     USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  node-1  5360k  19.9G      0        0       0        0   exists,up
 1  node-1  5296k  19.9G      0        0       0        0   exists,up
 2  node-2  5296k  19.9G      0        0       0        0   exists,up
 3  node-2  5228k  19.9G      0        0       0        0   exists,up

Add additional mon nodes

cephadm@mon:~/cluster$ ceph-deploy mon add mon-1
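Before this, the new host (mon-1 here, which is not part of the /etc/hosts example above) needs name resolution, the cephadm user and SSH setup, and the mon packages, e.g. installed with ceph-deploy install --mon mon-1. Quorum can then be checked with:

cephadm@mon:~/cluster$ ceph quorum_status --format json-pretty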

Add additional mgr nodes

root@node-2:~# apt install ceph-mgr
cephadm@mon:~/cluster$ ceph-deploy mgr create node-2


Follow the WeChat account 小张的知识杂货铺 so we can learn and improve together.
If you need them, reply "ceph" in the account's backend to get the related study materials.

From: https://www.cnblogs.com/xiaozhang1995/p/16996666.html
