
ceph-3


Object Storage Gateway RadosGW: https://docs.ceph.com/en/quincy/radosgw/
Characteristics:
Data does not need to be placed in a directory hierarchy; it lives at a single level in a flat address space.
Applications identify each individual data object by a unique address.
Each object can carry metadata that helps retrieval.
Access happens at the application level (not the user level) through a RESTful API.
Keep the storage availability well monitored.

 

 S3

https://aws.amazon.com/cn/s3/

How it works

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications and mobile apps. With cost-effective storage classes and easy-to-use management features, you can optimize costs, organize data and configure fine-tuned access controls to meet specific business, organizational and compliance requirements.

RadosGW object storage gateway overview:
RadosGW is one way of implementing object storage (OSS, Object Storage Service). The RADOS gateway, also called the Ceph Object Gateway, RadosGW or RGW, is a service that lets clients access a Ceph cluster through standard object storage APIs; it supports the AWS S3 and Swift APIs. Since Ceph 0.80 it uses the Civetweb web server (https://github.com/civetweb/civetweb) to answer API requests. Clients talk to RGW over http/https using a RESTful API, while RGW talks to the Ceph cluster through librados. An RGW client authenticates to RGW as an RGW user via the S3 or Swift API, and the RGW gateway in turn authenticates to the Ceph storage with cephx on the user's behalf.

S3 was launched by Amazon in 2006; the full name is Simple Storage Service. S3 defined object storage and is its de facto standard; in a sense S3 is object storage and object storage is S3. It dominates the object storage market, and later object storage systems all imitate S3.

RadosGW storage characteristics:
Data is stored as objects through the object storage gateway; besides the data itself, every object also contains its own metadata.
Objects are retrieved by Object ID. They cannot be accessed directly by mounting a normal file system and using a path plus file name; they can only be reached through the API, or through third-party clients (which are themselves wrappers around the API).
Objects are not stored in a vertical directory tree but in a flat namespace. Amazon S3 calls this flat namespace a bucket, while Swift calls it a container.
Neither buckets nor containers can be nested (a bucket cannot contain another bucket).
A bucket must be authorized before it can be accessed; one account can be granted access to multiple buckets, each with different permissions.
Easy to scale out and fast to retrieve data.
Client-side mounting is not supported, and the client must specify the object name when accessing it.
Not well suited to scenarios where files are modified or deleted very frequently.

Ceph uses buckets as storage containers (storage spaces) to store object data and isolate users from each other. Data lives in buckets, and user permissions are also granted per bucket, so different users can be given different permissions on different buckets to implement access control.

Bucket properties:
A bucket is the container that holds objects; every object must belong to a bucket. Bucket attributes such as region, access permissions and lifecycle can be set and changed, and these settings apply to all objects in the bucket, so different buckets can be created to serve different management purposes.
The inside of a bucket is flat; there is no file-system concept of directories, and all objects belong directly to their bucket.
Each user can own multiple buckets.
A bucket name must be globally unique within the OSS and cannot be changed after creation.
There is no limit on the number of objects inside a bucket.

Bucket naming rules: https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucketnamingrules.html
Only lowercase letters, digits and hyphens (-) are allowed.
Must begin and end with a lowercase letter or a digit.
Must be 3 to 63 characters long.
Bucket names must not be formatted like an IP address.
Bucket names must be globally unique.

RadosGW architecture diagram:

 

 

RadosGW logical diagram:

 

Object storage access model comparison:
Amazon S3: provides user, bucket and object to represent the user, the storage bucket and the object. A bucket belongs to a user; per-user access permissions can be set on different bucket namespaces, and different users may access the same bucket.
OpenStack Swift: provides user, container and object, corresponding to the user, the storage bucket and the object, and additionally gives user a parent component called account. An account represents a project or tenant (an OpenStack user), so one account can contain one or more users, who share the same set of containers, and the account provides the namespace for those containers.
RadosGW: provides user, subuser, bucket and object, where user corresponds to the S3 user and subuser corresponds to the Swift user. Neither user nor subuser provides a namespace for buckets, so buckets of different users may not share the same name. Since the Jewel release, however, RadosGW has an optional tenant component that provides a namespace for users and buckets. RadosGW uses ACLs to grant different users different permissions, such as:
Read: read permission
Write: write permission
Readwrite: read and write permission
full-control: full control

Deploy the RadosGW service:
Deploy the ceph-mgr1 and ceph-mgr2 servers as a highly available radosGW service.
Install the radosgw package and initialize it:
Ubuntu:
root@ceph-mgr1:~# apt install radosgw
CentOS:
[root@ceph-mgr1 ~]# yum install ceph-radosgw
[root@ceph-mgr2 ~]# yum install ceph-radosgw

Once started, it listens on port 7480.

 

 

root@ceph-mgr2:~# apt install radosgw

cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy rgw create ceph-mgr2
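A quick sanity check (not shown in the original notes): with the gateway created, the default civetweb listener should answer on port 7480 of the RGW node (10.4.7.138 / ceph-mgr2 here); an anonymous request returns a small XML ListAllMyBuckets response.

cephadmin@ceph-deploy:~/ceph-cluster$ curl http://10.4.7.138:7480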

 

 

root@ceph-client:~# apt install keepalived

root@ceph-client:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
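The keepalived configuration itself is not reproduced in the original; the sketch below is a minimal VRRP instance, assuming 10.4.7.111 as the VIP (the address HAProxy binds to later in these notes) and eth0 as the interface name — adjust both to the local environment.

root@ceph-client:~# vi /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    state MASTER
    interface eth0                        # assumed NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.4.7.111 dev eth0 label eth0:0  # assumed VIP
    }
}

root@ceph-client:~# systemctl restart keepalived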

 

root@ceph-client:~# apt install haproxy

root@ceph-client:~# cat /etc/haproxy/haproxy.cfg
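The HAProxy configuration is not shown either; a minimal sketch for balancing the two RGW instances on their default port 7480, in the same style as the listen blocks used later in these notes (listen name and bind address are assumptions):

listen ceph-rgw-80
  bind 10.4.7.111:80
  mode tcp
  server rgw1 10.4.7.137:7480 check inter 2s fall 3 rise 3
  server rgw2 10.4.7.138:7480 check inter 2s fall 3 rise 3

root@ceph-client:~# systemctl restart haproxy.service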

 

 Access through the VIP

 Access through the domain name

Verify the radosgw service status and the radosgw processes:

RadosGW storage pool types:
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool ls

 

View the default radosgw pool information:
cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin zone get --rgw-zone=default --rgw-zonegroup=default   (dump the data of the default zone)
# RGW pool information:
.rgw.root: contains the realm information, such as zone and zonegroup.
default.rgw.log: stores log information, used to record all kinds of logs.
default.rgw.control: system control pool; when data is updated it notifies the other RGWs to refresh their caches.
default.rgw.meta: metadata pool; different rados objects are stored under different namespaces, including users.uid for user UIDs and their bucket mappings, users.keys for user keys, users.email for user email addresses, users.swift for subusers, and root for buckets.
default.rgw.buckets.index: stores the bucket-to-object index information.
default.rgw.buckets.data: stores the object data.
default.rgw.buckets.non-ec: pool for additional data information.
default.rgw.users.uid: pool that stores user information.
default.rgw.data.root: stores bucket metadata; the structure corresponds to RGWBucketInfo and holds the bucket name, bucket ID, data_pool and so on.

cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool get default.rgw.buckets.data crush_rule
Error ENOENT: unrecognized pool 'default.rgw.buckets.data'   (the pool does not exist yet; default.rgw.buckets.data is only created automatically once the first object is uploaded)

Check the default replica count:

cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool get default.rgw.buckets.data size
Error ENOENT: unrecognized pool 'default.rgw.buckets.data'

Check the default PG/PGP count:

cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool get default.rgw.buckets.data pgp_num
Error ENOENT: unrecognized pool 'default.rgw.buckets.data'

 

RGW pool roles:
cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd lspools

 

Verify the RGW zone information:

 cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin zone get --rgw-zone=default

Access the radosgw service:

 

radosgw HTTP high availability:
Custom HTTP port:
The configuration file can be modified on the ceph-deploy server and then pushed out centrally, or each radosgw server's configuration can be changed individually to the same settings; then restart the RGW service.
https://docs.ceph.com/en/latest/radosgw/frontends/
[root@ceph-mgr2 ~]# vim /etc/ceph/ceph.conf
# Append the node-specific custom configuration at the end:
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900
# Restart the service
[root@ceph-mgr2 ~]# systemctl restart [email protected]
HAProxy also needs its backend port changed and a restart.

radosgw HTTPS:
Generate a self-signed certificate on the RGW node and configure radosgw to enable SSL:
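To confirm the custom port took effect (a quick check, not part of the original output), the gateway should now answer on 9900:

root@ceph-mgr2:~# curl http://10.4.7.138:9900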

root@ceph-mgr2:~# cd /etc/ceph/
root@ceph-mgr2:/etc/ceph# mkdir certs
root@ceph-mgr2:/etc/ceph# cd certs/
root@ceph-mgr2:/etc/ceph/certs# openssl genrsa -out civetweb.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
............+++++
.....+++++
e is 65537 (0x010001)

root@ceph-mgr2:/etc/ceph/certs# cd /root
root@ceph-mgr2:~# openssl rand -writerand .rnd
root@ceph-mgr2:~# cd -
/etc/ceph/certs
root@ceph-mgr2:/etc/ceph/certs# openssl req -new -x509 -key civetweb.key -out civetweb.crt -subj "/CN=rgw.awem.com"

root@ceph-mgr2:/etc/ceph/certs# cat civetweb.key civetweb.crt > civetweb.pem

SSL configuration:
root@ceph-mgr2:/etc/ceph/certs# vim /etc/ceph/ceph.conf

[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = "civetweb port=9900+9443s ssl_certificate=/etc/ceph/certs/civetweb.pem"

root@ceph-mgr2:/etc/ceph/certs# systemctl restart [email protected]

Sync the RGW certificate and configuration to the other RGW node:

 root@ceph-mgr2:/etc/ceph/certs# vim /etc/ceph/ceph.conf

root@ceph-mgr2:/etc/ceph# scp ceph.conf 10.4.7.137:/etc/ceph

[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = "civetweb port=9900+9443s ssl_certificate=/etc/ceph/certs/civetweb.pem"

[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = "civetweb port=9900+9443s ssl_certificate=/etc/ceph/certs/civetweb.pem"

root@ceph-mgr2:/etc/ceph/certs# scp * 10.4.7.137:/etc/ceph/certs

 

 root@ceph-client:~# vi /etc/haproxy/haproxy.cfg
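The updated HAProxy configuration is not reproduced; a minimal sketch that forwards the new HTTP port and passes HTTPS straight through to the RGW nodes in TCP mode (so the self-signed certificate stays on the gateways) — listen names and bind addresses/ports are assumptions:

listen ceph-rgw-9900
  bind 10.4.7.111:80
  mode tcp
  server rgw1 10.4.7.137:9900 check inter 2s fall 3 rise 3
  server rgw2 10.4.7.138:9900 check inter 2s fall 3 rise 3

listen ceph-rgw-9443
  bind 10.4.7.111:443
  mode tcp
  server rgw1 10.4.7.137:9443 check inter 2s fall 3 rise 3
  server rgw2 10.4.7.138:9443 check inter 2s fall 3 rise 3

root@ceph-client:~# systemctl restart haproxy.service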

  

 

 root@ceph-mgr2:/etc/ceph# cat ceph.conf

 

root@ceph-mgr2:/etc/ceph# mkdir -p /var/log/radosgw/

root@ceph-mgr2:/etc/ceph# chown ceph.ceph /var/log/radosgw/ -R

root@ceph-mgr2:/etc/ceph# curl -k https://10.4.7.138:9443

Client (s3cmd) data read/write test: https://github.com/s3tools/s3cmd
root@ceph-mgr2:/etc/ceph# vi ceph.conf

 

 root@ceph-mgr2:/etc/ceph# systemctl restart  [email protected]

root@ceph-mgr1:/etc/ceph# vi ceph.conf

 root@ceph-mgr1:/etc/ceph# systemctl restart  [email protected]

 

Create an RGW account: s3cmd is a client-side tool and needs access keys, and creating a user generates them.
cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin user create --uid="user1" --display-name="user1"

"access_key": "RPYI716AKE8KPAPGYCZP",
"secret_key": "fzueWA7SgyZZY70bAia6VjEfkPIXrMPDDBNRVSsR"

 

Install the s3cmd client:
cephadmin@ceph-deploy:~/ceph-cluster$ apt-cache madison s3cmd
cephadmin@ceph-deploy:~/ceph-cluster$ sudo apt install s3cmd
cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd --help
Configure the s3cmd client environment:
Add a hosts entry for the s3cmd client:
cephadmin@ceph-deploy:~/ceph-cluster$ sudo vim /etc/hosts
10.4.7.138 rgw.awen.com   (resolves to the RGW server)
The interactive configuration writes its file into the home directory:
root@ceph-deploy:~# s3cmd --configure

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: RPYI716AKE8KPAPGYCZP
Secret Key: fzueWA7SgyZZY70bAia6VjEfkPIXrMPDDBNRVSsR
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw.awen.com:9900

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: rgw.awen.com:9900/%(bucket)

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
Access Key: RPYI716AKE8KPAPGYCZP
Secret Key: fzueWA7SgyZZY70bAia6VjEfkPIXrMPDDBNRVSsR
Default Region: US
S3 Endpoint: rgw.awen.com:9900
DNS-style bucket+hostname:port template for accessing a bucket: rgw.awen.com:9900/%(bucket)
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/home/cephadmin/.s3cfg'


Verify the credentials file:

cephadmin@ceph-deploy:~/ceph-cluster$ cat /home/cephadmin/.s3cfg
[default]
access_key = RPYI716AKE8KPAPGYCZP
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = rgw.awen.com:9900
host_bucket = rgw.awen.com:9900/%(bucket)
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = fzueWA7SgyZZY70bAia6VjEfkPIXrMPDDBNRVSsR
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html

Verify data upload with the s3cmd command-line client:
Create buckets:
cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd mb s3://magedu
cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd mb s3://css
cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd la
Upload data:

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd put 2609178_linux.jpg s3://images
upload: '2609178_linux.jpg' -> 's3://images/2609178_linux.jpg' [1 of 1]
23839 of 23839 100% in 0s 1306.78 kB/s done

Verify the data:

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd ls s3://css
2023-01-08 04:27 23839 s3://css/2609178_linux.jpg

Verify downloading a file:

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd get s3://images/awen/2609178_linux.jpg /tmp/
download: 's3://images/awen/2609178_linux.jpg' -> '/tmp/2609178_linux.jpg' [1 of 1]
23839 of 23839 100% in 0s 2.00 MB/s done

Delete a file:

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd rm s3://images/awen/1.jdg
delete: 's3://images/awen/1.jdg'
cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd ls s3://images/awen/
2023-01-08 05:05 23839 s3://images/awen/2609178_linux.jpg

 

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd rb s3://images/
Bucket 's3://images/' removed

 

A static/dynamic content split based on Nginx + RGW, with a short-video serving example

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd info  s3://magedu

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd mb s3://video
Bucket 's3://video/' created
cephadmin@ceph-deploy:~/ceph-cluster$ vi video-bucket_policy

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": [
      "arn:aws:s3:::mybucket/*"
    ]
  }]
}

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd setpolicy video-bucket_policy s3://mybucket
s3://mybucket/: Policy updated

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd put 1.jpg s3://mybucket/
upload: '1.jpg' -> 's3://mybucket/1.jpg' [1 of 1]
23839 of 23839 100% in 0s 1055.36 kB/s done

cephadmin@ceph-deploy:~/ceph-cluster$ vi video-bucket_policy
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::video/*" ] }] }

 

 

Apply the authorization:

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd setpolicy video-bucket_policy s3://video

 Upload the videos:

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd put 1.mp4 s3://video
upload: '1.mp4' -> 's3://video/1.mp4' [part 1 of 2, 15MB] [1 of 1]
15728640 of 15728640 100% in 8s 1789.92 kB/s done
upload: '1.mp4' -> 's3://video/1.mp4' [part 2 of 2, 9MB] [1 of 1]
9913835 of 9913835 100% in 0s 24.97 MB/s done
cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd put 2.mp4 s3://video
upload: '2.mp4' -> 's3://video/2.mp4' [1 of 1]
2463765 of 2463765 100% in 0s 25.01 MB/s done
cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd put 3.mp4 s3://video
upload: '3.mp4' -> 's3://video/3.mp4' [1 of 1]
591204 of 591204 100% in 0s 10.75 MB/s done
cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd ls s3://video
2023-01-08 08:02 25642475 s3://video/1.mp4
2023-01-08 08:02 2463765 s3://video/2.mp4
2023-01-08 08:02 591204 s3://video/3.mp4

Check the bucket information:

cephadmin@ceph-deploy:~/ceph-cluster$ s3cmd info s3://video

 cephadmin@ceph-deploy:~/ceph-cluster$ sudo chown -R cephadmin.cephadmin 1.mp4 2.mp4 3.mp4
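As an extra check (not in the original output), the bucket policy above grants anonymous s3:GetObject, so an unauthenticated request against the RGW endpoint should succeed, assuming the rgw.awen.com hosts entry added earlier:

cephadmin@ceph-deploy:~/ceph-cluster$ curl -I http://rgw.awen.com:9900/video/1.mp4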

 

Configure the reverse proxy

root@ceph-client-1:~# apt install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server iotop unzip zip

root@ceph-client-1:~# cd /usr/local/src/
root@ceph-client-1:/usr/local/src# wget https://nginx.org/download/nginx-1.22.0.tar.gz

root@ceph-client-1:/usr/local/src# tar -xzvf nginx-1.22.0.tar.gz

root@ceph-client-1:/usr/local/src# cd nginx-1.22.0/
root@ceph-client-1:/usr/local/src/nginx-1.22.0# mkdir -p /apps/nginx

root@ceph-client-1:/usr/local/src/nginx-1.22.0# ./configure --prefix=/apps/nginx \
> --user=nginx \
> --group=nginx \
> --with-http_ssl_module \
> --with-http_v2_module \
> --with-http_realip_module \
> --with-http_stub_status_module \
> --with-http_gzip_static_module \
> --with-pcre \
> --with-stream \
> --with-stream_ssl_module \
> --with-stream_realip_module

root@ceph-client-1:/usr/local/src/nginx-1.22.0# make && make install

root@ceph-client-1:/apps/nginx/conf# cat nginx.conf

user  root;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

           upstream video {
           server 10.4.7.138:9900;
           server 10.4.7.137:9900;
          }

    server {
        listen       80;
        server_name  rgw.ygc.cn;

         proxy_buffering off;
         proxy_set_header Host $host;
         proxy_set_header X-Forwarded-For $remote_addr;

        location / {
            root   html;
            index  index.html index.htm;
        }

        location ~* \.(mp4|avi)$ {
            proxy_pass http://video;
        }
        
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

}

root@ceph-client-1:/apps/nginx/conf# /apps/nginx/sbin/nginx -t
nginx: the configuration file /apps/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /apps/nginx/conf/nginx.conf test is successful
root@ceph-client-1:/apps/nginx/conf# /apps/nginx/sbin/nginx
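A quick check through the proxy (assuming a local hosts entry that points rgw.ygc.cn at this nginx host): the .mp4 location should hand the request to the RGW upstream and return the object uploaded above.

root@ceph-client-1:/apps/nginx/conf# curl -I http://rgw.ygc.cn/video/1.mp4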

 

 root@ceph-client:~# apt install tomcat9

root@ceph-client:~# cd /var/lib/tomcat9/webapps/ROOT

root@ceph-client:/var/lib/tomcat9/webapps/ROOT# mkdir app
root@ceph-client:/var/lib/tomcat9/webapps/ROOT# cd app/
root@ceph-client:/var/lib/tomcat9/webapps/ROOT/app# echo "<h1>Hello Java APP </h1>" >index.html

root@ceph-client:/var/lib/tomcat9/webapps/ROOT# mkdir app2

root@ceph-client:/var/lib/tomcat9/webapps/ROOT/app2# vi index.jsp
java app

root@ceph-client:/var/lib/tomcat9/webapps/ROOT/app2# systemctl restart tomcat9

 

 root@ceph-client-1:/apps/nginx/conf# vi nginx.conf

# inside the http{} block, next to the existing "upstream video":
upstream tomcat {
    server 10.4.7.132:8080;
}

# inside the server{} block:
location /app2 {
    proxy_pass http://tomcat;
}
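After editing, reload nginx and check that /app2 is now answered by Tomcat (host name resolution assumed as above):

root@ceph-client-1:/apps/nginx/conf# /apps/nginx/sbin/nginx -s reload
root@ceph-client-1:/apps/nginx/conf# curl http://rgw.ygc.cn/app2/index.jsp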

Enable the dashboard plugin:
https://docs.ceph.com/en/mimic/mgr/
https://docs.ceph.com/en/latest/mgr/dashboard/
https://packages.debian.org/unstable/ceph-mgr-dashboard   # version 15 has dependencies that need to be resolved separately
Ceph mgr is a multi-plugin (modular) component whose modules can be enabled or disabled individually. The following is done on the ceph-deploy server:
Newer versions require installing the dashboard package, and it must be installed on the mgr nodes, otherwise it fails like this:
The following packages have unmet dependencies:
ceph-mgr-dashboard : Depends: ceph-mgr (= 15.2.13-1~bpo10+1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

root@ceph-mgr1:~# apt-cache madison ceph-mgr-dashboard
root@ceph-mgr1:~# apt install ceph-mgr-dashboard

# Show help

root@ceph-mgr1:~# ceph mgr module -h

# List all modules
cephadmin@ceph-deploy:~/ceph-cluster$ ceph mgr module ls
# Enable the module
cephadmin@ceph-deploy:~/ceph-cluster$ ceph mgr module enable dashboard
After the module is enabled it is not reachable yet; SSL has to be disabled (or enabled) and a listen address has to be configured.
Configure the dashboard module:
The Ceph dashboard is set up on the mgr node, and SSL can be turned on or off, as follows:
# Disable SSL
cephadmin@ceph-deploy:~/ceph-cluster$ ceph config set mgr mgr/dashboard/ssl false
# Set the dashboard listen address
cephadmin@ceph-deploy:~/ceph-cluster$ ceph config set mgr mgr/dashboard/ceph-mgr1/server_addr 10.4.7.137
# Set the dashboard listen port
cephadmin@ceph-deploy:~/ceph-cluster$ ceph config set mgr mgr/dashboard/ceph-mgr1/server_port 9009
It takes a little while before it comes up; here the mgr service was restarted once:
root@ceph-mgr1:~# systemctl restart [email protected]
The first time the dashboard plugin is enabled it needs some time (a few minutes); then verify on the node where it was enabled.
If you see the error "Module 'dashboard' has failed: error('No socket could be created',)", check whether the mgr service is running properly; restarting the mgr service can help.

Dashboard access verification:
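A quick way to confirm the dashboard is actually listening on the configured address and port (run on ceph-mgr1; not part of the original output):

root@ceph-mgr1:~# ss -ntlp | grep 9009
root@ceph-mgr1:~# curl -I http://10.4.7.137:9009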

cephadmin@ceph-deploy:~/ceph-cluster$ touch pass.txt
cephadmin@ceph-deploy:~/ceph-cluster$ echo "12345678" > pass.txt
cephadmin@ceph-deploy:~/ceph-cluster$ ceph dashboard set-login-credentials awen -i pass.txt

******************************************************************************
***                                  WARNING: this command is deprecated. ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************************

Username and password updated

Dashboard SSL:
To access the dashboard over SSL, a certificate has to be configured. It can be generated with the ceph command or with the openssl command.
https://docs.ceph.com/en/latest/mgr/dashboard/
Ceph self-signed certificate:
# Generate the certificate:
[cephadmin@ceph-deploy ceph-cluster]$ ceph dashboard create-self-signed-cert
# Enable SSL:
[cephadmin@ceph-deploy ceph-cluster]$ ceph config set mgr mgr/dashboard/ssl true
# Restart the mgr daemons, then check the current dashboard status:
root@ceph-mgr1:~# systemctl restart ceph-mgr@ceph-mgr1
root@ceph-mgr2:~# systemctl restart ceph-mgr@ceph-mgr2

cephadmin@ceph-deploy:~/ceph-cluster$ ceph mgr services
{
          "dashboard": "https://10.4.7.137:8443/"
}

 

 

Monitoring the Ceph nodes with Prometheus: https://prometheus.io/
Deploy Prometheus:

root@ceph-mgr2:~# mkdir /apps && cd /apps
root@ceph-mgr2:/apps# cd /usr/local/src/

root@ceph-mgr2:/usr/local/src# tar -xzvf prometheus-server-2.40.5-onekey-install.tar.gz

root@ceph-mgr2:/usr/local/src# bash prometheus-install.sh

root@ceph-mgr2:/usr/local/src# systemctl status prometheus

Access Prometheus:

 

 

Deploy node_exporter

node_exporter must be installed on every Ceph node:

root@ceph-node1:~# cd /usr/local/src
root@ceph-node1:/usr/local/src# tar xf node-exporter-1.5.0-onekey-install.tar.gz

root@ceph-node1:/usr/local/src# bash node-exporter-1.5.0-onekey-install.sh

Add a scrape job on the Prometheus server:

root@ceph-mgr2:/apps/prometheus# vi prometheus.yml

- job_name: 'ceph-node-data'
  static_configs:
     - targets: ['10.4.7.139:9100','10.4.7.140:9100','10.4.7.141:9100','10.4.7.142:9100']
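Prometheus has to be restarted (or reloaded) after editing prometheus.yml; one of the node_exporter endpoints can also be checked directly (10.4.7.139 is one of the node IPs listed above):

root@ceph-mgr2:/apps/prometheus# systemctl restart prometheus.service
root@ceph-mgr2:/apps/prometheus# curl -s http://10.4.7.139:9100/metrics | head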

 

 

Monitor the Ceph services with Prometheus:
The Ceph manager ships a built-in prometheus module that listens on port 9283 of every manager node; this port exposes the collected information to Prometheus over an HTTP endpoint.
https://docs.ceph.com/en/mimic/mgr/prometheus/?highlight=prometheus

Enable the prometheus monitoring module:
cephadmin@ceph-deploy:~/ceph-cluster$ ceph mgr module enable prometheus
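Once the module is enabled, each active mgr serves metrics over HTTP on port 9283; a quick check against ceph-mgr1 (not part of the original output):

cephadmin@ceph-deploy:~/ceph-cluster$ curl -s http://10.4.7.137:9283/metrics | head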

 

 

root@ceph-client:~# vi /etc/haproxy/haproxy.cfg

listen ceph-prometheus-9283
  bind 10.4.7.111:9283
  mode tcp
  server rgw1 10.4.7.137:9283 check inter 2s fall 3 rise 3
  server rgw2 10.4.7.138:9283 check inter 2s fall 3 rise 3

root@ceph-client:~# systemctl restart haproxy.service

Access through the VIP

root@ceph-mgr2:/apps/prometheus# vi prometheus.yml

- job_name: 'ceph-cluster-data'
  static_configs:
     - targets: ['10.4.7.111:9283']

 root@ceph-mgr2:/apps/prometheus# systemctl restart prometheus.service

Import a Grafana dashboard template

ceph-cluster

https://grafana.com/grafana/dashboards/2842-ceph-cluster/

 

 

Customizing the Ceph CRUSH map to store hot and cold data separately on SSD and HDD disks

 cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd df

weight represents the relative capacity of a device: 1TB corresponds to 1.00, so a 500G OSD should have a weight of 0.5. weight is used to distribute PGs according to disk space, so that the CRUSH algorithm assigns more PGs to OSDs with more disk space and fewer PGs to OSDs with less.

cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd crush reweight --help

osd crush reweight <name> <weight:float>

# Changing the weight of an OSD with a given ID triggers a redistribution of data. Verify the OSD weight:

cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd crush reweight osd.3 0.0003
reweighted item id 3 name 'osd.3' to 0.0003 in crush map

 

Do this during off-peak hours, since it involves data rebalancing.

The purpose of the reweight parameter is to rebalance the PGs that Ceph's CRUSH algorithm assigned randomly. The default placement is only balanced probabilistically, so even when all OSDs have identical disk space some PGs end up unevenly distributed. Adjusting reweight makes the Ceph cluster immediately rebalance the PGs on the current disks to achieve an even data distribution. REWEIGHT applies after PGs have already been allocated and makes the cluster rebalance their distribution.

Change REWEIGHT and verify:
An OSD's REWEIGHT defaults to 1 and can be adjusted in the range 0–1; the lower the value, the fewer PGs the OSD receives. If the REWEIGHT of any OSD is changed, its PGs are immediately rebalanced against the other OSDs, i.e. data is redistributed. This is used when an OSD holds relatively many PGs and its PG count needs to be reduced.

cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd reweight 2 0.9
reweighted osd.2 to 0.9 (e666)

The PG count on that OSD decreases.

 

 

The above dynamically adjusts the PG distribution across OSDs, i.e. rebalances the data.

The CRUSH map is stored in the mon, so a tool is needed to export the mon's (binary) data.

Convert the binary file to text.

Adjust it with vim to define custom CRUSH rules.

Convert the text file back into a binary file.

Import the binary file into the mon.

Export the CRUSH map:
Note: the exported CRUSH map is in binary format and cannot be opened directly with a text editor; it must be converted to text with the crushtool utility before it can be opened and edited with vim or another text editor.
cephadmin@ceph-deploy:~$ mkdir data
Export the data as a binary file:
cephadmin@ceph-deploy:~/data$ ceph osd getcrushmap -o ./crushmap-v1

cephadmin@ceph-deploy:~/data$ file crushmap-v1
crushmap-v1: GLS_BINARY_LSB_FIRST

 

Convert the map to text:
The exported map cannot be edited directly; it has to be converted to text before it can be viewed and edited.
Install crushtool:
cephadmin@ceph-deploy:~/data$ sudo apt install ceph-base
cephadmin@ceph-deploy:~/data$ crushtool -d crushmap-v1

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

 

 

 # end crush map

Write the decompiled map to a text file:

cephadmin@ceph-deploy:~/data$ crushtool -d crushmap-v1 -o crushmap-v1.txt

Wipe the SSD disks:

cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node1
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node2
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node3
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node4
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy disk zap ceph-node1 /dev/nvme0n1
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy disk zap ceph-node2 /dev/nvme0n1
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy disk zap ceph-node3 /dev/nvme0n1
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy disk zap ceph-node4 /dev/nvme0n1

Add the SSD OSDs:

cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf osd create ceph-node1  --data /dev/nvme0n1

cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf osd create ceph-node2  --data /dev/nvme0n1

cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf osd create ceph-node3  --data /dev/nvme0n1

cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf osd create ceph-node4  --data /dev/nvme0n1

Create a test pool:

cephadmin@ceph-deploy:~/data$ ceph osd pool create testpool 32 32

cephadmin@ceph-deploy:~/data$ ceph pg ls-by-pool testpool | awk '{print $1,$2,$15}'

Edit the CRUSH map text:

 cephadmin@ceph-deploy:~/data$ vi crushmap-v1.txt

 

Compile the text back into CRUSH (binary) format:
cephadmin@ceph-deploy:~/data$ crushtool -c crushmap-v1.txt -o crushmap-v2
Inspect the binary file:
cephadmin@ceph-deploy:~/data$ crushtool -d crushmap-v2
Import the new CRUSH map:
The imported map immediately overwrites the existing map and takes effect at once.
cephadmin@ceph-deploy:~/data$ ceph osd setcrushmap -i ./crushmap-v2
Verify that the CRUSH map took effect:
cephadmin@ceph-deploy:~/data$ ceph osd crush rule dump
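Optionally (not in the original notes), the compiled map can be sanity-checked offline with crushtool --test before importing it; --rule takes the rule id, and 88/89 are the ids of the custom HDD/SSD rules defined in the edited map below:

cephadmin@ceph-deploy:~/data$ crushtool -i crushmap-v2 --test --rule 88 --num-rep 3 --show-mappings | head
cephadmin@ceph-deploy:~/data$ crushtool -i crushmap-v2 --test --rule 89 --num-rep 3 --show-mappings | head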

 

Export the CRUSH map again:
cephadmin@ceph-deploy:~/data$ ceph osd getcrushmap -o ./crushmap-v3
Dump it to a text file:
cephadmin@ceph-deploy:~/data$ crushtool -d crushmap-v3 > crushmap-v3.txt

 

cephadmin@ceph-deploy:~/data$ cat crushmap-v1.txt
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class ssd
device 9 osd.9 class ssd
device 10 osd.10 class ssd
device 11 osd.11 class ssd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

# buckets
host ceph-node1 {
        id -3           # do not change unnecessarily
        id -4 class hdd         # do not change unnecessarily
        id -11 class ssd        # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.0 weight 0.030
        item osd.4 weight 0.010
        item osd.8 weight 0.005
}
host ceph-node2 {
        id -5           # do not change unnecessarily
        id -6 class hdd         # do not change unnecessarily
        id -12 class ssd        # do not change unnecessarily
        # weight 0.054
        alg straw2
        hash 0  # rjenkins1
        item osd.1 weight 0.030
        item osd.5 weight 0.019
        item osd.9 weight 0.005
}
host ceph-node3 {
        id -7           # do not change unnecessarily
        id -8 class hdd         # do not change unnecessarily
        id -13 class ssd        # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.2 weight 0.030
        item osd.6 weight 0.010
        item osd.10 weight 0.005
}
host ceph-node4 {
        id -9           # do not change unnecessarily
        id -10 class hdd        # do not change unnecessarily
        id -14 class ssd        # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.3 weight 0.030
        item osd.7 weight 0.010
        item osd.11 weight 0.005
}
root default {
        id -1           # do not change unnecessarily
        id -2 class hdd         # do not change unnecessarily
        id -15 class ssd        # do not change unnecessarily
        # weight 0.189
        alg straw2
        hash 0  # rjenkins1
        item ceph-node1 weight 0.045
        item ceph-node2 weight 0.054
        item ceph-node3 weight 0.045
        item ceph-node4 weight 0.045
}

#my hdd node
host ceph-hddnode1 {
        id -103         # do not change unnecessarily
        id -104 class hdd       # do not change unnecessarily
        id -110 class ssd       # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.0 weight 0.030
        item osd.4 weight 0.010
        item osd.8 weight 0.005   # this ssd item needs to be removed
}
host ceph-hddnode2 {
        id -105         # do not change unnecessarily
        id -106 class hdd       # do not change unnecessarily
        id -120 class ssd       # do not change unnecessarily
        # weight 0.054
        alg straw2
        hash 0  # rjenkins1
        item osd.1 weight 0.030
        item osd.5 weight 0.019
        item osd.9 weight 0.005   # this ssd item needs to be removed
}
host ceph-hddnode3 {
        id -107         # do not change unnecessarily
        id -108 class hdd       # do not change unnecessarily
        id -130 class ssd       # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.2 weight 0.030
        item osd.6 weight 0.010
        item osd.10 weight 0.005   # this ssd item needs to be removed
}
host ceph-hddnode4 {
        id -109         # do not change unnecessarily
        id -110 class hdd       # do not change unnecessarily
        id -140 class ssd       # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.3 weight 0.030
        item osd.7 weight 0.010
        item osd.11 weight 0.005   # this ssd item needs to be removed
}

#my ssd node
host ceph-ssdnode1 {
        id -203         # do not change unnecessarily
        id -204 class hdd       # do not change unnecessarily
        id -205 class ssd       # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.0 weight 0.030   # this hdd item needs to be removed
        item osd.4 weight 0.010   # this hdd item needs to be removed
        item osd.8 weight 0.005
}
host ceph-ssdnode2 {
        id -206         # do not change unnecessarily
        id -207 class hdd       # do not change unnecessarily
        id -208 class ssd       # do not change unnecessarily
        # weight 0.054
        alg straw2
        hash 0  # rjenkins1
        item osd.1 weight 0.030   # this hdd item needs to be removed
        item osd.5 weight 0.019   # this hdd item needs to be removed
        item osd.9 weight 0.005
}
host ceph-ssdnode3 {
        id -209         # do not change unnecessarily
        id -210 class hdd       # do not change unnecessarily
        id -211 class ssd       # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.2 weight 0.030   # this hdd item needs to be removed
        item osd.6 weight 0.010   # this hdd item needs to be removed
        item osd.10 weight 0.005
}
host ceph-ssdnode4 {
        id -212         # do not change unnecessarily
        id -213 class hdd       # do not change unnecessarily
        id -214 class ssd       # do not change unnecessarily
        # weight 0.045
        alg straw2
        hash 0  # rjenkins1
        item osd.3 weight 0.030   # this hdd item needs to be removed
        item osd.7 weight 0.010   # this hdd item needs to be removed
        item osd.11 weight 0.005
}

#my hdd bucket
root hdd {
        id -215         # do not change unnecessarily
        id -216 class hdd       # do not change unnecessarily
        id -217 class ssd       # do not change unnecessarily
        # weight 0.189
        alg straw2
        hash 0  # rjenkins1
        item ceph-hddnode1 weight 0.045
        item ceph-hddnode2 weight 0.054
        item ceph-hddnode3 weight 0.045
        item ceph-hddnode4 weight 0.045
}
#my ssd bucket
root ssd {
        id -218         # do not change unnecessarily
        id -219 class hdd       # do not change unnecessarily
        id -220 class ssd       # do not change unnecessarily
        # weight 0.189
        alg straw2
        hash 0  # rjenkins1
        item ceph-ssdnode1 weight 0.045
        item ceph-ssdnode2 weight 0.054
        item ceph-ssdnode3 weight 0.045
        item ceph-ssdnode4 weight 0.045
}

#my hdd rules
rule my_hdd_rule {
        id 88
        type replicated
        min_size 1
        max_size 12
        step take hdd
        step chooseleaf firstn 0 type host
        step emit
}
# rules
rule my_ssd_rule {
        id 89
        type replicated
        min_size 1
        max_size 12
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 12
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule erasure-code {
        id 1
        type erasure
        min_size 3
        max_size 4
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}
# end crush map

cephadmin@ceph-deploy:~/data$ crushtool -c crushmap-v1.txt -o crushmap-v2
cephadmin@ceph-deploy:~/data$ crushtool -d crushmap-v2

Import the rules:

cephadmin@ceph-deploy:~/data$ ceph osd setcrushmap -i crushmap-v2
81

View the rules:

cephadmin@ceph-deploy:~/data$ ceph osd crush dump

 

Create a storage pool that uses the HDD rule:

cephadmin@ceph-deploy:~/data$ ceph osd pool create my-hddpool 32 32 my_hdd_rule

Verify that OSDs 8, 9, 10 and 11 (the SSDs) no longer appear in the acting sets — the HDD rule only places data on the mechanical disks:

cephadmin@ceph-deploy:~/data$ ceph pg ls-by-pool my-hddpool | awk '{print $1,$2,$15}'
PG OBJECTS ACTING
18.0 0 [0,3,1]p0
18.1 0 [0,1,7]p0
18.2 0 [0,7,5]p0
18.3 0 [0,7,5]p0
18.4 0 [0,1,3]p0
18.5 0 [1,0,3]p1
18.6 0 [1,2,3]p1
18.7 0 [2,5,3]p2
18.8 0 [3,2,0]p3
18.9 0 [1,2,0]p1
18.a 0 [0,6,3]p0
18.b 0 [0,1,3]p0
18.c 0 [2,0,3]p2
18.d 0 [1,2,3]p1
18.e 0 [5,4,3]p5
18.f 0 [1,3,0]p1
18.10 0 [1,4,6]p1
18.11 0 [2,3,4]p2
18.12 0 [1,3,4]p1
18.13 0 [1,0,2]p1
18.14 0 [7,5,2]p7
18.15 0 [0,5,3]p0
18.16 0 [6,4,5]p6
18.17 0 [3,2,5]p3
18.18 0 [2,1,7]p2
18.19 0 [1,2,4]p1
18.1a 0 [3,4,1]p3
18.1b 0 [2,1,3]p2
18.1c 0 [4,3,2]p4
18.1d 0 [5,3,2]p5
18.1e 0 [5,3,0]p5
18.1f 0 [0,3,1]p0
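The SSD rule can be exercised the same way (a parallel check, not part of the original output); the acting sets of this pool should contain only osd.8 through osd.11:

cephadmin@ceph-deploy:~/data$ ceph osd pool create my-ssdpool 32 32 my_ssd_rule
cephadmin@ceph-deploy:~/data$ ceph pg ls-by-pool my-ssdpool | awk '{print $1,$2,$15}'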


From: https://www.cnblogs.com/tshxawen/p/17026134.html
