
Deploying a MongoDB 7.0 Cluster

Published: 2024-03-07 22:35:35

Environment:

OS: openEuler 22.03 LTS-SP3
MongoDB: 7.0.6
mongodb-database-tools: 100.9.0
mongosh: 2.1.5
GCC: 12.3.1
Python: 3.9.9
Clang: 12.0.1

Server plan:

Hostname   IP address      Mongos port   Config Server port   Shard Server ports
mongo-01   192.168.83.10   27017         27018                Primary: 27019 / Secondary: 27020 / Arbiter: 27021
mongo-02   192.168.83.11   27017         27018                Arbiter: 27019 / Primary: 27020 / Secondary: 27021
mongo-03   192.168.83.12   27017         27018                Secondary: 27019 / Arbiter: 27020 / Primary: 27021

Logical diagram: (figure not included)

Component roles:

  • Mongos Server
    The entry point for all requests to the cluster. Every request goes through a mongos server, so applications need no routing logic of their own: mongos is a request dispatcher that forwards each operation to the appropriate shard.
    In production you normally run multiple mongos servers as entry points, so that losing one does not leave the whole cluster unreachable.
  • Config Server
    The configuration servers store all of the cluster's metadata (routing and shard configuration).
    A mongos server does not persist shard and routing information itself; it only caches it in memory, while the config servers hold the authoritative copy.
    When a mongos first starts, or restarts, it loads its configuration from the config servers; when the metadata later changes, the config servers notify every mongos to update its state so that routing stays accurate.
    In production you should run multiple config servers: they hold the sharding metadata, and a single copy would be a single point of failure.
  • Shard Server
    Sharding is the process of splitting a database and distributing its data across multiple machines.
    Spreading data over several machines lets you store more data and handle more load without a single, very powerful server.
    The basic idea is to split collections into chunks, distribute the chunks across the shards so that each shard holds only part of the total data, and let a balancer keep the shards even by migrating chunks.
  • Primary
    A replica set has at most one primary, and it is the only member that accepts writes.
    MongoDB applies writes on the primary and records them in the primary's oplog.
    Secondaries copy the oplog and apply its operations to their own data sets.
  • Secondary
    A secondary keeps its data set consistent with the primary by applying the operations replicated from the primary.
    Secondaries can also be configured for special purposes with extra options;
    for example, a secondary can be non-voting or have priority 0.
  • Arbiter
    An arbiter is a voting-only member: it holds no data and can never become primary.
    When the current primary becomes unavailable, however, the arbiter takes part in the election of the new primary.
    Arbiters use minimal resources and do not require dedicated hardware.
    An arbiter lets a replica set with an even number of data-bearing members reach an odd number of voters without adding another data-bearing node.
    In production, do not run an arbiter on the same machine as the primary or a secondary of its replica set.
    An arbiter exchanges only election votes, heartbeats, and configuration data with the other members, and these exchanges are not encrypted.
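Since production deployments run several mongos routers, clients usually list all of them in the connection string so the driver can fail over between routers. A hypothetical URI using the host/port plan above:

```
mongodb://mongo-01:27017,mongo-02:27017,mongo-03:27017/admin
```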

I. Basic configuration of each MongoDB node (run on all MongoDB nodes)

1. Configure /etc/hosts

Append the following entries to /etc/hosts on every node:

192.168.83.10   mongo-01
192.168.83.11   mongo-02
192.168.83.12   mongo-03

2. Update system packages and install dependencies

yum -y update
yum -y install net-tools lrzsz nmap tree bash-completion tar chrony libcurl-devel

3. Set ulimit resource limits for the mongodb user

cat >> /etc/security/limits.conf << EOF
mongodb soft nproc 65535
mongodb hard nproc 65535
mongodb soft nofile 81920
mongodb hard nofile 81920
EOF
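A quick way to confirm the limits took effect, assuming a fresh login as the mongodb user (a sketch; the expected values mirror the limits.conf entries above):

```shell
# Print the current shell's limits; limits.conf applies at login,
# so log in again as the mongodb user before checking.
ulimit -n   # max open files  -- should report 81920 for mongodb
ulimit -u   # max processes   -- should report 65535 for mongodb
```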

4. Upgrade GCC

4.1 Upgrade to gcc 12.3.1

yum install -y gcc-toolset-12-gcc* clang libcurl-devel

4.2 Set environment variables

cat > /etc/profile.d/gcc.sh << 'EOF'
export PATH=/opt/openEuler/gcc-toolset-12/root/usr/bin/:$PATH
export LD_LIBRARY_PATH=/opt/openEuler/gcc-toolset-12/root/usr/lib64/:$LD_LIBRARY_PATH
EOF

source /etc/profile.d/gcc.sh

# gcc --version
gcc (GCC) 12.3.1 20230508 (openEuler 12.3.1-16.oe2203sp3)
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

5. Create the mongodb user

groupadd mongodb
useradd -M -g mongodb mongodb

Set the mongodb user's password (123456 here):
echo "123456" | passwd mongodb --stdin

6. Configure sudoers

visudo
Add after the existing group rules (around line 107):
%mongodb  ALL=(ALL)  NOPASSWD: ALL

7. Create the MongoDB working directories

mkdir -p /opt/mongo7.0/{logs,yaml,data,pki,bin,pidfile} 
mkdir /opt/mongo7.0/data/{config_svr,shard-1,shard-2,shard-3}
mkdir /opt/mongo7.0/logs/{mongos,config_svr,shard-1,shard-2,shard-3}

# tree /opt/mongo7.0/
/opt/mongo7.0/
├── bin
├── data
│   ├── config_svr
│   ├── shard-1
│   ├── shard-2
│   └── shard-3
├── logs
│   ├── config_svr
│   ├── mongos
│   ├── shard-1
│   ├── shard-2
│   └── shard-3
├── pki
├── pidfile
└── yaml
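The two-level mkdir pattern above relies on shell brace expansion; the same commands against a throwaway prefix (safe to run anywhere) create 15 directories in total:

```shell
# Recreate the layout under a temporary prefix to show the brace expansion.
prefix=$(mktemp -d)
mkdir -p "$prefix"/{logs,yaml,data,pki,bin,pidfile}
mkdir "$prefix"/data/{config_svr,shard-1,shard-2,shard-3}
mkdir "$prefix"/logs/{mongos,config_svr,shard-1,shard-2,shard-3}
find "$prefix" -mindepth 1 -type d | wc -l   # 15
```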

8. Unpack all the MongoDB binaries

First, on mongo-01:
tar -xf mongodb-linux-x86_64-rhel80-7.0.6.tgz
cp mongodb-linux-*/bin/* /opt/mongo7.0/bin/

tar -xf mongodb-database-tools-rhel80-x86_64-100.9.0.tgz
cp mongodb-database-tools-*/bin/* /opt/mongo7.0/bin/

tar -xf mongosh-2.1.5-linux-x64.tgz
cp mongosh-*/bin/* /opt/mongo7.0/bin/

Then sync the binaries to the other mongo nodes:
scp /opt/mongo7.0/bin/* root@mongo-02:/opt/mongo7.0/bin/
scp /opt/mongo7.0/bin/* root@mongo-03:/opt/mongo7.0/bin/

9. Add MongoDB environment variables

cat > /etc/profile.d/mongodb.sh << 'EOF'
export MONGODB_HOME=/opt/mongo7.0
export PATH=$MONGODB_HOME/bin:$PATH
EOF

source /etc/profile.d/mongodb.sh

# mongod --version
db version v7.0.6
Build Info: {
    "version": "7.0.6",
    "gitVersion": "66cdc1f28172cb33ff68263050d73d4ade73b9a4",
    "openSSLVersion": "OpenSSL 1.1.1wa  16 Nov 2023",
    "modules": [],
    "allocator": "tcmalloc",
    "environment": {
        "distmod": "rhel80",
        "distarch": "x86_64",
        "target_arch": "x86_64"
    }
}

II. Generate the mongo-keyfile authentication file

1. On the mongo-01 node:

openssl rand -base64 756 > ${MONGODB_HOME}/pki/mongo.keyfile
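The keyfile is plain base64 text; a quick sanity check of its size and mode, using only openssl and coreutils (a sketch with a throwaway file, not the real keyfile):

```shell
# Generate a throwaway keyfile the same way and inspect it.
# base64 of 756 random bytes is 1008 characters before line wrapping.
tmpkey=$(mktemp)
openssl rand -base64 756 > "$tmpkey"
chmod 400 "$tmpkey"
tr -d '\n' < "$tmpkey" | wc -c    # 1008
stat -c '%a' "$tmpkey"            # 400 -- mongod rejects group/world-readable keyfiles
```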

2. Copy mongo.keyfile to the other mongodb nodes

scp ${MONGODB_HOME}/pki/mongo.keyfile root@mongo-02:${MONGODB_HOME}/pki
scp ${MONGODB_HOME}/pki/mongo.keyfile root@mongo-03:${MONGODB_HOME}/pki

3. Set ownership and permissions on the MongoDB files and directories

chown -R mongodb:mongodb ${MONGODB_HOME}
chmod 400 ${MONGODB_HOME}/pki/mongo.keyfile

ssh root@mongo-02 "chown -R mongodb:mongodb ${MONGODB_HOME}"
ssh root@mongo-03 "chown -R mongodb:mongodb ${MONGODB_HOME}"

ssh root@mongo-02 "chmod 400 ${MONGODB_HOME}/pki/mongo.keyfile"
ssh root@mongo-03 "chmod 400 ${MONGODB_HOME}/pki/mongo.keyfile"

III. Create the Config Server replica set

1. Create config_svr.yml on the mongo-01 node

cat > ${MONGODB_HOME}/yaml/config_svr.yml << EOF
processManagement:
  fork: true
  pidFilePath: ${MONGODB_HOME}/pidfile/config_svr.pid
net:
  bindIpAll: true
  port: 27018
  ipv6: true
  maxIncomingConnections: 20000
storage:
  dbPath: ${MONGODB_HOME}/data/config_svr
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
systemLog:
  destination: file
  path: ${MONGODB_HOME}/logs/config_svr/config_svr.log
  logAppend: true
sharding:
  clusterRole: configsvr
replication:
  oplogSizeMB: 1000
  replSetName: configsvr_rs
security:
  authorization: enabled
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile
setParameter:
  connPoolMaxConnsPerHost: 20000
EOF

Note: for the initial setup, comment out the following four lines with #; re-enable them once the whole cluster has been configured:

security:
  authorization: enabled
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile
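Commented out, the block in config_svr.yml looks like this during the initial bring-up:

```yaml
# security:
#   authorization: enabled
#   keyFile: /opt/mongo7.0/pki/mongo.keyfile
#   clusterAuthMode: keyFile
```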

2. Copy config_svr.yml to the other mongodb nodes and set ownership

scp ${MONGODB_HOME}/yaml/config_svr.yml root@mongo-02:${MONGODB_HOME}/yaml/
scp ${MONGODB_HOME}/yaml/config_svr.yml root@mongo-03:${MONGODB_HOME}/yaml/

ssh root@mongo-02 "chown mongodb:mongodb ${MONGODB_HOME}/yaml/config_svr.yml"
ssh root@mongo-03 "chown mongodb:mongodb ${MONGODB_HOME}/yaml/config_svr.yml"

3.1 Create the Config Server systemd unit file

cat > /usr/lib/systemd/system/mongodb-config-server.service << EOF
[Unit]
Description=MongoDB Config Server
After=network.target

[Service]
Type=forking
User=mongodb
Group=mongodb
PIDFile=/opt/mongo7.0/pidfile/config_svr.pid
ExecStart=/opt/mongo7.0/bin/mongod --config /opt/mongo7.0/yaml/config_svr.yml
ExecStop=/opt/mongo7.0/bin/mongod --shutdown --config /opt/mongo7.0/yaml/config_svr.yml

[Install]
WantedBy=multi-user.target
EOF

3.2 Copy the Config Server unit file to the other Config Server nodes

scp /usr/lib/systemd/system/mongodb-config-server.service root@mongo-02:/usr/lib/systemd/system
scp /usr/lib/systemd/system/mongodb-config-server.service root@mongo-03:/usr/lib/systemd/system

4. Start the Config Server service on every Config Server node

systemctl daemon-reload
systemctl enable mongodb-config-server.service

su - mongodb
sudo systemctl start mongodb-config-server.service
sudo systemctl status mongodb-config-server.service

$ sudo ss -lntp
State            Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port          Process                                      
LISTEN           0                128                              0.0.0.0:27018                          0.0.0.0:*              users:(("mongod",pid=2983,fd=15))           
LISTEN           0                128                              0.0.0.0:22                             0.0.0.0:*              users:(("sshd",pid=813,fd=3))               
LISTEN           0                128                                 [::]:27018                             [::]:*              users:(("mongod",pid=2983,fd=16))           
LISTEN           0                128                                 [::]:22                                [::]:*              users:(("sshd",pid=813,fd=4))      

5. Initialize the Config Server replica set on mongo-01

su - mongodb
mongosh --host localhost --port 27018

use admin
Define the config variable:
config = {_id: "configsvr_rs", members: [
  {_id: 0, host: "192.168.83.10:27018"},
  {_id: 1, host: "192.168.83.11:27018"},
  {_id: 2, host: "192.168.83.12:27018"} ]
}

Note: _id: "configsvr_rs" must match the replSetName in config_svr.yml, and each "host" in "members" is a node's IP/hostname plus the Config Server port.

Initialize the replica set:
admin> rs.initiate(config)
{ ok: 1 }

Check the status:
configsvr_rs [direct: other] admin> rs.status()
{
  set: 'configsvr_rs',
  date: ISODate('2024-03-07T13:08:17.558Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  configsvr: true,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1709816896, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-03-07T13:08:16.957Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1709816896, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1709816896, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1709816896, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-03-07T13:08:16.957Z'),
    lastDurableWallTime: ISODate('2024-03-07T13:08:16.957Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1709816861, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-03-07T13:07:51.779Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1709816861, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1709816861, i: 1 }), t: Long('-1') },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-03-07T13:07:51.824Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-03-07T13:07:52.336Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.83.10:27018',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 123,
      optime: { ts: Timestamp({ t: 1709816896, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-03-07T13:08:16.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:08:16.957Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:08:16.957Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1709816871, i: 1 }),
      electionDate: ISODate('2024-03-07T13:07:51.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.83.11:27018',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 36,
      optime: { ts: Timestamp({ t: 1709816894, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1709816894, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-03-07T13:08:14.000Z'),
      optimeDurableDate: ISODate('2024-03-07T13:08:14.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:08:16.957Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:08:16.957Z'),
      lastHeartbeat: ISODate('2024-03-07T13:08:15.845Z'),
      lastHeartbeatRecv: ISODate('2024-03-07T13:08:16.844Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.83.10:27018',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.83.12:27018',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 36,
      optime: { ts: Timestamp({ t: 1709816894, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1709816894, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-03-07T13:08:14.000Z'),
      optimeDurableDate: ISODate('2024-03-07T13:08:14.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:08:16.957Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:08:16.957Z'),
      lastHeartbeat: ISODate('2024-03-07T13:08:15.842Z'),
      lastHeartbeatRecv: ISODate('2024-03-07T13:08:16.845Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.83.10:27018',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709816896, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709816896, i: 1 })
}

IV. Create the Shard Server replica sets

1. On the mongo-01 node, create the config files for the three Shard Server replica sets (01/02/03)

1.1 Create the Shard Server 01 replica set config

cat > ${MONGODB_HOME}/yaml/shard-1.yml << EOF
processManagement:
  fork: true
  pidFilePath: ${MONGODB_HOME}/pidfile/shard-1.pid
net:
  bindIpAll: true
  port: 27019
  ipv6: true
  maxIncomingConnections: 20000
storage:
  dbPath: ${MONGODB_HOME}/data/shard-1
  wiredTiger:
    engineConfig:
      cacheSizeGB: 5
systemLog:
  destination: file
  path: ${MONGODB_HOME}/logs/shard-1/shard-1.log
  logAppend: true
sharding:
  clusterRole: shardsvr
replication:
  oplogSizeMB: 1000
  replSetName: shardsvr_rs1
security:
  authorization: enabled
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile
setParameter:
  connPoolMaxConnsPerHost: 20000
  maxNumActiveUserIndexBuilds: 6
EOF

Note: for the initial setup, comment out the following four lines with #; re-enable them once the whole cluster has been configured:

security:
  authorization: enabled
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile

1.2 Create the Shard Server 02 replica set config

cat > ${MONGODB_HOME}/yaml/shard-2.yml << EOF
processManagement:
  fork: true
  pidFilePath: ${MONGODB_HOME}/pidfile/shard-2.pid
net:
  bindIpAll: true
  port: 27020
  ipv6: true
  maxIncomingConnections: 20000
storage:
  dbPath: ${MONGODB_HOME}/data/shard-2
  wiredTiger:
    engineConfig:
      cacheSizeGB: 5
systemLog:
  destination: file
  path: ${MONGODB_HOME}/logs/shard-2/shard-2.log
  logAppend: true
sharding:
  clusterRole: shardsvr
replication:
  oplogSizeMB: 1000
  replSetName: shardsvr_rs2
security:
  authorization: enabled
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile
setParameter:
  connPoolMaxConnsPerHost: 20000
  maxNumActiveUserIndexBuilds: 6
EOF

Note: for the initial setup, comment out the following four lines with #; re-enable them once the whole cluster has been configured:

security:
  authorization: enabled
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile

1.3 Create the Shard Server 03 replica set config

cat > ${MONGODB_HOME}/yaml/shard-3.yml << EOF
processManagement:
  fork: true
  pidFilePath: ${MONGODB_HOME}/pidfile/shard-3.pid
net:
  bindIpAll: true
  port: 27021
  ipv6: true
  maxIncomingConnections: 20000
storage:
  dbPath: ${MONGODB_HOME}/data/shard-3
  wiredTiger:
    engineConfig:
      cacheSizeGB: 5
systemLog:
  destination: file
  path: ${MONGODB_HOME}/logs/shard-3/shard-3.log
  logAppend: true
sharding:
  clusterRole: shardsvr
replication:
  oplogSizeMB: 1000
  replSetName: shardsvr_rs3
security:
  authorization: enabled
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile
setParameter:
  connPoolMaxConnsPerHost: 20000
  maxNumActiveUserIndexBuilds: 6
EOF

Note: for the initial setup, comment out the following four lines with #; re-enable them once the whole cluster has been configured:

security:
  authorization: enabled
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile
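The three shard-N.yml files above differ only in the index, the port, and the replica-set name, so they can also be generated in one loop. A sketch assuming the same layout (MONGODB_HOME falls back to a temp directory here so the sketch is safe to try anywhere; the security block is left out, matching the initial auth-less bring-up):

```shell
# Generate shard-1.yml .. shard-3.yml from one template.
MONGODB_HOME=${MONGODB_HOME:-$(mktemp -d)}   # real deployments: /opt/mongo7.0
mkdir -p "${MONGODB_HOME}/yaml"
for i in 1 2 3; do
  port=$((27018 + i))                        # shard-1 -> 27019 ... shard-3 -> 27021
  cat > "${MONGODB_HOME}/yaml/shard-${i}.yml" << EOF
processManagement:
  fork: true
  pidFilePath: ${MONGODB_HOME}/pidfile/shard-${i}.pid
net:
  bindIpAll: true
  port: ${port}
  ipv6: true
  maxIncomingConnections: 20000
storage:
  dbPath: ${MONGODB_HOME}/data/shard-${i}
  wiredTiger:
    engineConfig:
      cacheSizeGB: 5
systemLog:
  destination: file
  path: ${MONGODB_HOME}/logs/shard-${i}/shard-${i}.log
  logAppend: true
sharding:
  clusterRole: shardsvr
replication:
  oplogSizeMB: 1000
  replSetName: shardsvr_rs${i}
setParameter:
  connPoolMaxConnsPerHost: 20000
  maxNumActiveUserIndexBuilds: 6
EOF
done
```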

2. Copy the three replica set config files to the other Shard Server nodes and set ownership

scp ${MONGODB_HOME}/yaml/shard-*.yml root@mongo-02:${MONGODB_HOME}/yaml
scp ${MONGODB_HOME}/yaml/shard-*.yml root@mongo-03:${MONGODB_HOME}/yaml

ssh root@mongo-02 "chown mongodb:mongodb ${MONGODB_HOME}/yaml/shard-*.yml"
ssh root@mongo-03 "chown mongodb:mongodb ${MONGODB_HOME}/yaml/shard-*.yml"

3. On the mongo-01 node, create systemd unit files for the three Shard Server replica sets

3.1 Create the Shard Server 01 unit file

cat > /usr/lib/systemd/system/mongodb-shard-1.service << EOF
[Unit]
Description=MongoDB Shard Server One
After=mongodb-config-server.service network.target

[Service]
Type=forking
User=mongodb
Group=mongodb
PIDFile=/opt/mongo7.0/pidfile/shard-1.pid
ExecStart=/opt/mongo7.0/bin/mongod --config /opt/mongo7.0/yaml/shard-1.yml
ExecStop=/opt/mongo7.0/bin/mongod --shutdown --config /opt/mongo7.0/yaml/shard-1.yml

[Install]
WantedBy=multi-user.target
EOF

3.2 Create the Shard Server 02 unit file

cat > /usr/lib/systemd/system/mongodb-shard-2.service << EOF
[Unit]
Description=MongoDB Shard Server Two
After=mongodb-config-server.service network.target

[Service]
Type=forking
User=mongodb
Group=mongodb
PIDFile=/opt/mongo7.0/pidfile/shard-2.pid
ExecStart=/opt/mongo7.0/bin/mongod --config /opt/mongo7.0/yaml/shard-2.yml
ExecStop=/opt/mongo7.0/bin/mongod --shutdown --config /opt/mongo7.0/yaml/shard-2.yml

[Install]
WantedBy=multi-user.target
EOF

3.3 Create the Shard Server 03 unit file

cat > /usr/lib/systemd/system/mongodb-shard-3.service << EOF
[Unit]
Description=MongoDB Shard Server Three
After=mongodb-config-server.service network.target

[Service]
Type=forking
User=mongodb
Group=mongodb
PIDFile=/opt/mongo7.0/pidfile/shard-3.pid
ExecStart=/opt/mongo7.0/bin/mongod --config /opt/mongo7.0/yaml/shard-3.yml
ExecStop=/opt/mongo7.0/bin/mongod --shutdown --config /opt/mongo7.0/yaml/shard-3.yml

[Install]
WantedBy=multi-user.target
EOF

3.4 Copy the three Shard Server unit files to the other Shard Server nodes

scp /usr/lib/systemd/system/mongodb-shard-*.service root@mongo-02:/usr/lib/systemd/system
scp /usr/lib/systemd/system/mongodb-shard-*.service root@mongo-03:/usr/lib/systemd/system

4. Start the three Shard Server services on every mongodb node

systemctl daemon-reload
systemctl enable mongodb-shard-1
systemctl enable mongodb-shard-2
systemctl enable mongodb-shard-3

su - mongodb
sudo systemctl start mongodb-shard-1
sudo systemctl start mongodb-shard-2
sudo systemctl start mongodb-shard-3
sudo systemctl status mongodb-shard-1
sudo systemctl status mongodb-shard-2
sudo systemctl status mongodb-shard-3

sudo ss -lntp
State            Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port          Process                                      
LISTEN           0                128                              0.0.0.0:27018                          0.0.0.0:*              users:(("mongod",pid=905,fd=15))            
LISTEN           0                128                              0.0.0.0:27019                          0.0.0.0:*              users:(("mongod",pid=1446,fd=15))           
LISTEN           0                128                              0.0.0.0:27020                          0.0.0.0:*              users:(("mongod",pid=1445,fd=15))           
LISTEN           0                128                              0.0.0.0:27021                          0.0.0.0:*              users:(("mongod",pid=1998,fd=15))           
LISTEN           0                128                              0.0.0.0:22                             0.0.0.0:*              users:(("sshd",pid=841,fd=3))               
LISTEN           0                128                                 [::]:27018                             [::]:*              users:(("mongod",pid=905,fd=16))            
LISTEN           0                128                                 [::]:27019                             [::]:*              users:(("mongod",pid=1446,fd=16))           
LISTEN           0                128                                 [::]:27020                             [::]:*              users:(("mongod",pid=1445,fd=16))           
LISTEN           0                128                                 [::]:27021                             [::]:*              users:(("mongod",pid=1998,fd=16))           
LISTEN           0                128                                 [::]:22                                [::]:*              users:(("sshd",pid=841,fd=4))

5. Initialize the Shard Server 01 replica set on mongo-01

su - mongodb
mongosh --host mongo-01 --port 27019

Switch to the admin database and define the replica set configuration:
use admin

Define the config variable; "arbiterOnly": true marks the arbiter node:
config = {_id: "shardsvr_rs1", members: [
    {_id: 0, host: "192.168.83.10:27019"},
    {_id: 1, host: "192.168.83.11:27019"},
    {_id: 2, host: "192.168.83.12:27019",arbiterOnly:true},
  ]
}

Note: _id: "shardsvr_rs1" must match the replSetName in shard-1.yml, and each "host" in "members" is a node's IP/hostname plus the Shard Server 1 port.

Initialize the replica set:
rs.initiate(config);

Check the status:
admin> rs.status()
{
  set: 'shardsvr_rs1',
  date: ISODate('2024-03-07T13:14:37.381Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 2,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1709817276, i: 21 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-03-07T13:14:36.721Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1709817276, i: 21 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1709817276, i: 21 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1709817276, i: 21 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-03-07T13:14:36.721Z'),
    lastDurableWallTime: ISODate('2024-03-07T13:14:36.721Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1709817265, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-03-07T13:14:36.142Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1709817265, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1709817265, i: 1 }), t: Long('-1') },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-03-07T13:14:36.164Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-03-07T13:14:36.675Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.83.10:27019',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 69,
      optime: { ts: Timestamp({ t: 1709817276, i: 21 }), t: Long('1') },
      optimeDate: ISODate('2024-03-07T13:14:36.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:14:36.721Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:14:36.721Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1709817276, i: 1 }),
      electionDate: ISODate('2024-03-07T13:14:36.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.83.11:27019',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 11,
      optime: { ts: Timestamp({ t: 1709817265, i: 1 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 1709817265, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-03-07T13:14:25.000Z'),
      optimeDurableDate: ISODate('2024-03-07T13:14:25.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:14:36.721Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:14:36.721Z'),
      lastHeartbeat: ISODate('2024-03-07T13:14:36.155Z'),
      lastHeartbeatRecv: ISODate('2024-03-07T13:14:37.158Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.83.12:27019',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 11,
      lastHeartbeat: ISODate('2024-03-07T13:14:36.155Z'),
      lastHeartbeatRecv: ISODate('2024-03-07T13:14:36.154Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709817276, i: 21 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709817276, i: 21 })
}

6. Initialize the Shard Server 02 replica set on mongo-02

su - mongodb
mongosh --host mongo-02 --port 27020

Switch to the admin database and define the replica set configuration:
use admin

Define the config variable; "arbiterOnly": true marks the arbiter node:
config = {_id: "shardsvr_rs2", members: [
    {_id: 0, host: "192.168.83.10:27020",arbiterOnly:true},
    {_id: 1, host: "192.168.83.11:27020"},
    {_id: 2, host: "192.168.83.12:27020"},
  ]
}

Note: _id: "shardsvr_rs2" must match the replSetName in shard-2.yml, and each "host" in "members" is a node's IP/hostname plus the Shard Server 2 port.

Initialize the replica set:
rs.initiate(config);

Check the status:
admin> rs.status()
{
  set: 'shardsvr_rs2',
  date: ISODate('2024-03-07T13:17:19.958Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 2,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1709817438, i: 6 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-03-07T13:17:18.479Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1709817438, i: 6 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1709817438, i: 6 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1709817438, i: 6 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-03-07T13:17:18.479Z'),
    lastDurableWallTime: ISODate('2024-03-07T13:17:18.479Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1709817426, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-03-07T13:17:17.875Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1709817426, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1709817426, i: 1 }), t: Long('-1') },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-03-07T13:17:17.895Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-03-07T13:17:18.436Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.83.10:27020',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 13,
      lastHeartbeat: ISODate('2024-03-07T13:17:19.891Z'),
      lastHeartbeatRecv: ISODate('2024-03-07T13:17:19.886Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 1,
      name: '192.168.83.11:27020',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 68,
      optime: { ts: Timestamp({ t: 1709817438, i: 6 }), t: Long('1') },
      optimeDate: ISODate('2024-03-07T13:17:18.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:17:18.479Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:17:18.479Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1709817437, i: 1 }),
      electionDate: ISODate('2024-03-07T13:17:17.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: '192.168.83.12:27020',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 13,
      optime: { ts: Timestamp({ t: 1709817438, i: 6 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1709817438, i: 6 }), t: Long('1') },
      optimeDate: ISODate('2024-03-07T13:17:18.000Z'),
      optimeDurableDate: ISODate('2024-03-07T13:17:18.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:17:18.479Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:17:18.479Z'),
      lastHeartbeat: ISODate('2024-03-07T13:17:19.892Z'),
      lastHeartbeatRecv: ISODate('2024-03-07T13:17:18.889Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.83.11:27020',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709817438, i: 6 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709817438, i: 6 })
}

7. Initialize the Shard Server 03 replica set on mongo-03

su - mongodb
mongosh --host mongo-03 --port 27021

Switch to the admin database and define the replica set configuration:
use admin

Define the config variable; "arbiterOnly": true marks the arbiter node:
config = {_id: "shardsvr_rs3", members: [
    {_id: 0, host: "192.168.83.10:27021"},
    {_id: 1, host: "192.168.83.11:27021",arbiterOnly:true},
    {_id: 2, host: "192.168.83.12:27021"},
  ]
}

Note: _id: "shardsvr_rs3" must match the replSetName in shard-3.yml, and each "host" in "members" is a node's IP/hostname plus the Shard Server 3 port.

Initialize the replica set:
rs.initiate(config);

Check the status:
admin> rs.status()
{
  set: 'shardsvr_rs3',
  date: ISODate('2024-03-07T13:20:22.521Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 2,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1709817616, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-03-07T13:20:16.940Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1709817616, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1709817616, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1709817616, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-03-07T13:20:16.940Z'),
    lastDurableWallTime: ISODate('2024-03-07T13:20:16.940Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1709817606, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-03-07T13:19:26.887Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1709817556, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1709817556, i: 1 }), t: Long('-1') },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-03-07T13:19:26.917Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-03-07T13:19:27.455Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.83.10:27021',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 65,
      optime: { ts: Timestamp({ t: 1709817616, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1709817616, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-03-07T13:20:16.000Z'),
      optimeDurableDate: ISODate('2024-03-07T13:20:16.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:20:16.940Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:20:16.940Z'),
      lastHeartbeat: ISODate('2024-03-07T13:20:20.945Z'),
      lastHeartbeatRecv: ISODate('2024-03-07T13:20:21.939Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.83.12:27021',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 1,
      name: '192.168.83.11:27021',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 65,
      lastHeartbeat: ISODate('2024-03-07T13:20:20.945Z'),
      lastHeartbeatRecv: ISODate('2024-03-07T13:20:20.944Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.83.12:27021',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 105,
      optime: { ts: Timestamp({ t: 1709817616, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-03-07T13:20:16.000Z'),
      lastAppliedWallTime: ISODate('2024-03-07T13:20:16.940Z'),
      lastDurableWallTime: ISODate('2024-03-07T13:20:16.940Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1709817566, i: 1 }),
      electionDate: ISODate('2024-03-07T13:19:26.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709817616, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709817616, i: 1 })
}

V. Create the Mongos Server router cluster

1. Create the mongos configuration file on Mongos Server 01

cat > ${MONGODB_HOME}/yaml/mongos.yml << EOF
processManagement:
  fork: true
  pidFilePath: ${MONGODB_HOME}/pidfile/mongos.pid
net:
  bindIpAll: true
  port: 27017
  ipv6: true
  maxIncomingConnections: 20000
systemLog:
  destination: file
  path: ${MONGODB_HOME}/logs/mongos/mongos.log
  logAppend: true
security:
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile
sharding:
  configDB: configsvr_rs/192.168.83.10:27018,192.168.83.11:27018,192.168.83.12:27018
EOF

Note: during the initial setup, comment out the following three lines with #; re-enable them only after the whole cluster has been configured:

security:
  keyFile: /opt/mongo7.0/pki/mongo.keyfile
  clusterAuthMode: keyFile
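During the initial bring-up, the commented-out security block in mongos.yml would look like this. Keep the # at the very start of each line, because the uncomment script used later in this guide strips exactly a leading # (sed `s/^#//`):

```yaml
#security:
#  keyFile: /opt/mongo7.0/pki/mongo.keyfile
#  clusterAuthMode: keyFile
```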

2. Create the mongos systemd unit file on Mongos Server 01

cat > /usr/lib/systemd/system/mongodb-mongos-server.service << EOF
[Unit]
Description=MongoDB Mongos Server
After=mongodb-config-server.service mongodb-shard-1.service mongodb-shard-2.service mongodb-shard-3.service network.target

[Service]
Type=forking
User=mongodb
Group=mongodb
PIDFile=/opt/mongo7.0/pidfile/mongos.pid
ExecStart=/opt/mongo7.0/bin/mongos --config /opt/mongo7.0/yaml/mongos.yml
ExecStop=/opt/mongo7.0/bin/mongos --shutdown --config /opt/mongo7.0/yaml/mongos.yml

[Install]
WantedBy=multi-user.target
EOF

3. Copy the mongos configuration file and systemd unit file to the other Mongos Server nodes

scp ${MONGODB_HOME}/yaml/mongos.yml root@mongo-02:${MONGODB_HOME}/yaml
scp ${MONGODB_HOME}/yaml/mongos.yml root@mongo-03:${MONGODB_HOME}/yaml
scp /usr/lib/systemd/system/mongodb-mongos-server.service root@mongo-02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/mongodb-mongos-server.service root@mongo-03:/usr/lib/systemd/system/

ssh root@mongo-02 "chown mongodb:mongodb ${MONGODB_HOME}/yaml/mongos.yml"
ssh root@mongo-03 "chown mongodb:mongodb ${MONGODB_HOME}/yaml/mongos.yml"

4. Start the mongos service on every Mongos Server node

systemctl daemon-reload
systemctl enable mongodb-mongos-server

su - mongodb
sudo systemctl start mongodb-mongos-server
sudo systemctl status mongodb-mongos-server

$ sudo ss -lntp | grep 27017
LISTEN 0      128          0.0.0.0:27017      0.0.0.0:*    users:(("mongos",pid=2426,fd=15))
LISTEN 0      128             [::]:27017         [::]:*    users:(("mongos",pid=2426,fd=16))

5. Enable sharding from any one Mongos Server

su - mongodb
mongosh --host localhost --port 27017

Switch to the admin database:
use admin

Add each shard replica set to the cluster:
sh.addShard("shardsvr_rs1/192.168.83.10:27019,192.168.83.11:27019,192.168.83.12:27019")
sh.addShard("shardsvr_rs2/192.168.83.10:27020,192.168.83.11:27020,192.168.83.12:27020")
sh.addShard("shardsvr_rs3/192.168.83.10:27021,192.168.83.11:27021,192.168.83.12:27021")

Note: if adding a shard fails with an error like the following:
sh.addShard("shardsvr_rs1/192.168.83.10:27019,192.168.83.11:27019,192.168.83.12:27019")
MongoServerError[OperationFailed]: Cannot add shardsvr_rs1/192.168.83.10:27019,192.168.83.11:27019,192.168.83.12:27019 as a shard since the implicit default write concern on this shard is set to {w : 1}, because number of arbiters in the shard's configuration caused the number of writable voting members not to be strictly more than the voting majority. Change the shard configuration or set the cluster-wide write concern using the setDefaultRWConcern command and try again.
This happens because, as db.adminCommand({ "getDefaultRWConcern": 1 }) run on a mongos shows, MongoDB defaults the write concern (defaultWriteConcern) to majority since version 5.0: a write only returns success after a majority of the voting, data-bearing members have acknowledged it. With 3 voting members per shard, the voting majority is floor(3/2)+1 = 2. But because one of the three members is an arbiter, only 2 members can actually hold data, so the number of writable voting members (2) is not strictly greater than the voting majority (2). In that case MongoDB sets the shard's implicit default write concern to {w: 1} instead of majority, and sh.addShard() refuses such a shard until a cluster-wide write concern is set explicitly.

Workaround: explicitly lower the cluster-wide default write concern, then add the shards again:
db.adminCommand({  "setDefaultRWConcern" : 1,  "defaultWriteConcern" : {    "w" : 1  }})

"w" : 1 只要主节点写入成功,就直接返回成功的响应,而不管副节点的同步情况
"w" : majority 超过节点半数【(节点数+1)/2】写入成功,才返回成功响应
"w" : 0 不等待任何节点确认写操作,只需写入到内存就返回成功,这是最低级别的写安全级别,这个配置可以提供写入性能,但也有一定的风险
"w" : 等待指定数量的节点确认写成功

[direct: mongos] admin> db.adminCommand({  "setDefaultRWConcern" : 1,  "defaultWriteConcern" : {    "w" : 1  }})
{
  defaultReadConcern: { level: 'local' },
  defaultWriteConcern: { w: 1, wtimeout: 0 },
  updateOpTime: Timestamp({ t: 1709818612, i: 1 }),
  updateWallClockTime: ISODate('2024-03-07T13:36:52.332Z'),
  defaultWriteConcernSource: 'global',
  defaultReadConcernSource: 'implicit',
  localUpdateWallClockTime: ISODate('2024-03-07T13:36:52.332Z'),
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709818612, i: 2 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709818612, i: 2 })
}
[direct: mongos] admin> db.adminCommand({ "getDefaultRWConcern": 1 })
{
  defaultReadConcern: { level: 'local' },
  defaultWriteConcern: { w: 1, wtimeout: 0 },
  updateOpTime: Timestamp({ t: 1709818612, i: 1 }),
  updateWallClockTime: ISODate('2024-03-07T13:36:52.332Z'),
  defaultWriteConcernSource: 'global',
  defaultReadConcernSource: 'implicit',
  localUpdateWallClockTime: ISODate('2024-03-07T13:36:52.332Z'),
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709818623, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709818623, i: 1 })
}
[direct: mongos] admin> sh.addShard("shardsvr_rs1/192.168.83.10:27019,192.168.83.11:27019,192.168.83.12:27019")
{
  shardAdded: 'shardsvr_rs1',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709818640, i: 5 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709818640, i: 5 })
}
[direct: mongos] admin> sh.addShard("shardsvr_rs2/192.168.83.10:27020,192.168.83.11:27020,192.168.83.12:27020")
{
  shardAdded: 'shardsvr_rs2',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709818659, i: 2 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709818658, i: 5 })
}
[direct: mongos] admin> sh.addShard("shardsvr_rs3/192.168.83.10:27021,192.168.83.11:27021,192.168.83.12:27021")
{
  shardAdded: 'shardsvr_rs3',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709818669, i: 3 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709818669, i: 3 })
}

VI. Enable cluster authentication

1. Create the admin user via mongos

su - mongodb
$ mongosh --host mongo-01 --port 27017
Create a superuser named admin with the root role on the admin database (effectively full privileges over all databases); it can later be removed with db.dropUser("admin"):
use admin
db.createUser({user: "admin", pwd: "abcd@123!", roles: [{ role: "root", db: "admin" }]})
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709818902, i: 5 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709818902, i: 5 })
}

Verify the account:
admin> db.auth("admin", "abcd@123!")
{ ok: 1 }

Role reference:
read: read-only access to the specified database.
readWrite: read and write access to the specified database.
dbAdmin: administrative functions on the specified database, such as creating and dropping indexes, viewing statistics, and accessing system.profile.
userAdmin: write access to the system.users collection; can create, delete, and manage users in the specified database.
clusterAdmin: admin database only; grants management of all sharding and replica set functions.
readAnyDatabase: admin database only; read access to all databases.
readWriteAnyDatabase: admin database only; read/write access to all databases.
userAdminAnyDatabase: admin database only; userAdmin on all databases.
dbAdminAnyDatabase: admin database only; dbAdmin on all databases.
root: admin database only; the superuser role.

2. Add the admin user on the primary node of each of Shard Server 01/02/03

2.1 On Shard Server 01

$ mongosh --host mongo-01 --port 27019
use admin
db.createUser({user: "admin", pwd: "abcd@123!", roles: [{ role: "root", db: "admin" }]})
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709819198, i: 4 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709819198, i: 4 })
}

2.2 On Shard Server 02

$ mongosh --host mongo-02 --port 27020
use admin
db.createUser({user: "admin", pwd: "abcd@123!", roles: [{ role: "root", db: "admin" }]})
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709819342, i: 3 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709819342, i: 3 })
}

2.3 On Shard Server 03

$ mongosh --host mongo-03 --port 27021
use admin
db.createUser({user: "admin", pwd: "abcd@123!", roles: [{ role: "root", db: "admin" }]})
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709819418, i: 4 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1709819418, i: 4 })
}

VII. Enable security authentication on every component

1. Create the following script and run it on every MongoDB node to uncomment the security section of the Config Server and Shard Server configuration files

cat > /opt/clear_annotate.sh << 'EOF'
#!/bin/bash

# Path to the configuration files
path="/opt/mongo7.0/yaml/"

# Files to process
files=("config_svr.yml" "shard-1.yml" "shard-2.yml" "shard-3.yml")

# Iterate over the file list
for file in "${files[@]}"
do
    # Check that the file exists
    if [ -f "${path}${file}" ]; then
        # Strip the leading # from the security-related lines
        sed -i '/security:/{s/^#//}' "${path}${file}"
        sed -i '/authorization:/{s/^#//}' "${path}${file}"
        sed -i '/keyFile:/{s/^#//}' "${path}${file}"
        sed -i '/clusterAuthMode:/{s/^#//}' "${path}${file}"
    else
        echo "File ${path}${file} does not exist."
    fi
done

echo "Done."

EOF

2. Restart all MongoDB components on all nodes, in order

Cluster restart order:
1. Config Server cluster > 2. Shard Server cluster > 3. Mongos cluster

systemctl restart mongodb-config-server
systemctl status mongodb-config-server

systemctl restart mongodb-shard-1
systemctl status mongodb-shard-1

systemctl restart mongodb-shard-2
systemctl status mongodb-shard-2

systemctl restart mongodb-shard-3
systemctl status mongodb-shard-3

systemctl restart mongodb-mongos-server
systemctl status mongodb-mongos-server

# ss -lntp
State            Recv-Q           Send-Q                     Local Address:Port                      Peer Address:Port          Process                                      
LISTEN           0                128                              0.0.0.0:27017                          0.0.0.0:*              users:(("mongos",pid=1806,fd=15))           
LISTEN           0                128                              0.0.0.0:27018                          0.0.0.0:*              users:(("mongod",pid=886,fd=15))            
LISTEN           0                128                              0.0.0.0:27019                          0.0.0.0:*              users:(("mongod",pid=1452,fd=15))           
LISTEN           0                128                              0.0.0.0:27020                          0.0.0.0:*              users:(("mongod",pid=1451,fd=15))           
LISTEN           0                128                              0.0.0.0:27021                          0.0.0.0:*              users:(("mongod",pid=1453,fd=15))           
LISTEN           0                128                              0.0.0.0:22                             0.0.0.0:*              users:(("sshd",pid=841,fd=3))               
LISTEN           0                128                                 [::]:27017                             [::]:*              users:(("mongos",pid=1806,fd=16))           
LISTEN           0                128                                 [::]:27018                             [::]:*              users:(("mongod",pid=886,fd=16))            
LISTEN           0                128                                 [::]:27019                             [::]:*              users:(("mongod",pid=1452,fd=16))           
LISTEN           0                128                                 [::]:27020                             [::]:*              users:(("mongod",pid=1451,fd=16))           
LISTEN           0                128                                 [::]:27021                             [::]:*              users:(("mongod",pid=1453,fd=16))           
LISTEN           0                128                                 [::]:22                                [::]:*              users:(("sshd",pid=841,fd=4))               

Note: when starting the Shard Server components, at least two Config Server nodes must already be up, i.e. the Config Server replica set must be available.

3. Log in to mongos as admin and create a regular user

$ mongosh -u admin -p abcd@123! mongo-01:27017/admin

Create a regular user named testuser with read/write access to the testdb database; it can later be removed with db.dropUser("testuser"):
use testdb
db.createUser({user:"testuser", pwd:"abcd#456", roles:[{role:"readWrite",db:"testdb"}]});
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1709820938, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('ddnnJbIZxnOm4NvBxeOvkyh+FwQ=', 0),
      keyId: Long('7343607543094050838')
    }
  },
  operationTime: Timestamp({ t: 1709820938, i: 1 })
}

Verify the account:
testdb> db.auth("testuser", "abcd#456")
{ ok: 1 }

4. Connect a client to the MongoDB cluster

Connect a client through multiple mongos routers at once:
mongosh mongodb://'testuser':'abcd#456'@192.168.83.10:27017,192.168.83.11:27017,192.168.83.12:27017/testdb?authSource=testdb
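If the connection URI is built programmatically, URI-reserved characters in the password (such as the # in abcd#456) must be percent-encoded. A minimal sketch in Node-style JavaScript, using the host list and credentials from this guide (the helper name buildMongoUri is illustrative, not part of any driver):

```javascript
// Build a multi-mongos connection URI with percent-encoded credentials.
function buildMongoUri(user, password, hosts, db, authSource) {
  const creds = `${encodeURIComponent(user)}:${encodeURIComponent(password)}`;
  return `mongodb://${creds}@${hosts.join(",")}/${db}?authSource=${authSource}`;
}

const uri = buildMongoUri(
  "testuser", "abcd#456",
  ["192.168.83.10:27017", "192.168.83.11:27017", "192.168.83.12:27017"],
  "testdb", "testdb"
);
console.log(uri);
// mongodb://testuser:abcd%23456@192.168.83.10:27017,192.168.83.11:27017,192.168.83.12:27017/testdb?authSource=testdb
```

Quoting the user and password in the shell command above protects them from the shell, but a # left raw inside a URI is still parsed as a fragment delimiter, so encoding it as %23 is the safe form for drivers.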

 

From: https://www.cnblogs.com/cn-jasonho/p/18024974
