
KingbaseES RAC O&M Case Study: Cluster and Database Management

Date: 2024-08-13 11:18:07

Case description:
This article covers routine cluster and database management tasks after a KingbaseES RAC deployment.
Applicable version:
KingbaseES V008R006C008M030B0010

Operating system version:

[root@node201 KingbaseHA]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

Cluster architecture:
node201 and node202 are the cluster nodes; node203 provides the shared iSCSI storage:

Node information:

[root@node201 KingbaseHA]# vi /etc/hosts
192.168.1.201 node201
192.168.1.202 node202
192.168.1.203 node203    iscsi_Srv

I. Cluster Database Structure
1. Database server processes
As shown below, each cluster node runs one instance, and all instances access the shared database. Each instance is started manually with sys_ctl on its node, and each instance has its own pid file:

[root@node201 KingbaseHA]# ps -ef |grep kingbase
kingbase 23496     1  0 11:05 ?        00:00:00 /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/Server/bin/kingbase -D /sharedata/data_gfs2/kingbase/data -c config_file=/sharedata/data_gfs2/kingbase/data/kingbase.conf -c log_directory=sys_log -h 0.0.0.0
kingbase 24164 23496  0 11:06 ?        00:00:00 kingbase: logger
kingbase 24165 23496  0 11:06 ?        00:00:00 kingbase: lmon
kingbase 24166 23496  0 11:06 ?        00:00:00 kingbase: lms   1
kingbase 24167 23496  0 11:06 ?        00:00:00 kingbase: lms   2
kingbase 24168 23496  0 11:06 ?        00:00:00 kingbase: lms   3
kingbase 24169 23496  0 11:06 ?        00:00:00 kingbase: lms   4
kingbase 24170 23496  0 11:06 ?        00:00:00 kingbase: lms   5
kingbase 24171 23496  0 11:06 ?        00:00:00 kingbase: lms   6
kingbase 24172 23496  0 11:06 ?        00:00:00 kingbase: lms   7
kingbase 24393 23496  0 11:06 ?        00:00:00 kingbase: checkpointer
kingbase 24394 23496  0 11:06 ?        00:00:00 kingbase: background writer
kingbase 24395 23496  0 11:06 ?        00:00:00 kingbase: global deadlock checker
kingbase 24396 23496  0 11:06 ?        00:00:00 kingbase: transaction syncer
kingbase 24397 23496  0 11:06 ?        00:00:00 kingbase: walwriter
kingbase 24398 23496  0 11:06 ?        00:00:00 kingbase: autovacuum launcher
kingbase 24399 23496  0 11:06 ?        00:00:00 kingbase: archiver   last was 00000001000000000000000E
kingbase 24402 23496  0 11:06 ?        00:00:00 kingbase: stats collector
kingbase 24403 23496  0 11:06 ?        00:00:00 kingbase: kwr collector
kingbase 24404 23496  0 11:06 ?        00:00:00 kingbase: ksh writer
kingbase 24405 23496  0 11:06 ?        00:00:00 kingbase: ksh collector
kingbase 24406 23496  0 11:06 ?        00:00:00 kingbase: logical replication launche

Tips:
The lms processes handle cluster requests and the communication with the other nodes.
The lms processes occupy 7 ports.
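As a quick sanity check (a sketch, not from the original article), the lms worker count can be verified from the process list shown above:

```shell
# Count the lms worker processes; per the note above we expect 7 of them.
# The sample below mirrors the process listing shown earlier; on a live
# node you would use:  ps -ef | grep '[l]ms' | wc -l
ps_output='kingbase: lms   1
kingbase: lms   2
kingbase: lms   3
kingbase: lms   4
kingbase: lms   5
kingbase: lms   6
kingbase: lms   7'
lms_count=$(printf '%s\n' "$ps_output" | grep -c 'kingbase: lms')
echo "$lms_count"   # expect 7
```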

# One instance per node; each instance has its own pid file:
[root@node201 ~]# ls -lh /sharedata/data_gfs2/kingbase/data/kingbase*.pid
-rw------- 1 kingbase kingbase 100 Aug 12 11:06 /sharedata/data_gfs2/kingbase/data/kingbase_1.pid
-rw------- 1 kingbase kingbase 100 Aug 12 11:06 /sharedata/data_gfs2/kingbase/data/kingbase_2.pid

2. Data storage architecture
1) Database data directory (on the gfs2 shared filesystem)

test=# show data_directory;
           data_directory
------------------------------------
 /sharedata/data_gfs2/kingbase/data
(1 row)

2) Per-node configuration files
By default all instances read the shared data/kingbase.conf. Each node can also have its own configuration file, which takes precedence over the shared database configuration:

[root@node201 ~]# ls -lh /sharedata/data_gfs2/kingbase/data/kingbase*.conf
-rw------- 1 kingbase kingbase   0 Aug  2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase_1.conf
-rw------- 1 kingbase kingbase   0 Aug  2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase_2.conf
-rw------- 1 kingbase kingbase   0 Aug  2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase_3.conf
-rw------- 1 kingbase kingbase   0 Aug  2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase_4.conf
-rw------- 1 kingbase kingbase  88 Aug  2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase.auto.conf
-rw------- 1 kingbase kingbase 28K Aug  2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase.conf
# To enable per-node configuration files, set in kingbase.conf:
sub_config_file='/sharedata/data_gfs2/kingbase/data/kingbase_node.conf'
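As an illustrative sketch (the parameter name is hypothetical; KingbaseES generally accepts PostgreSQL-style parameters), node 1 could carry a private override in kingbase_1.conf while the other nodes keep the shared defaults:

```ini
# --- kingbase_1.conf: read only by the instance on node 1 ---
# Hypothetical per-node override (PostgreSQL-style parameter name assumed);
# settings here take precedence over the shared kingbase.conf.
log_min_messages = debug1
```

Since these files override the shared configuration, they are the natural place for node-specific tuning.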

3) Per-node wal logs and sys_log logs
As shown below, each node's wal logs and sys_log logs are stored in a subdirectory named after the node id, under sys_wal and sys_log respectively:

# sys_wal logs
[root@node201 ~]# ls -lh /sharedata/data_gfs2/kingbase/data/sys_wal
total 16K
drwx------ 3 kingbase kingbase 3.8K Aug 12 11:11 1
drwx------ 3 kingbase kingbase 3.8K Aug 12 11:11 2

# sys_log logs
[root@node201 ~]# ls -lh /sharedata/data_gfs2/kingbase/data/sys_log
total 8.0K
drwx------ 2 kingbase kingbase 3.8K Aug 12 11:06 1
drwx------ 2 kingbase kingbase 3.8K Aug 12 11:05 2

II. Starting the Cluster and Database
1. Start the cluster (all nodes)

[root@node201 ~]# cd /opt/KingbaseHA/
[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[  OK  ]
Starting Corosync Cluster Engine (corosync): [WARNING]
clean qdisk fence flag start
clean qdisk fence flag success
Starting Qdisk Fenced daemon (qdisk-fenced): [  OK  ]
Starting Corosync Qdevice daemon (corosync-qdevice): [  OK  ]
Waiting for quorate:.....................................................................................................................................[  OK  ]
Starting Pacemaker Cluster Manager[  OK  ]

2. Check resource status

# Check cluster service status
[root@node201 KingbaseHA]# ./cluster_manager.sh status
corosync (pid 2937) is running...
pacemakerd (pid 3277) is running...
corosync-qdevice (pid 2955) is running...

[root@node201 KingbaseHA]# ./cluster_manager.sh --status_pacemaker
pacemakerd (pid 11521) is running...
[root@node201 KingbaseHA]# ./cluster_manager.sh  --status_corosync
corosync (pid 9924) is running...
[root@node201 KingbaseHA]# ./cluster_manager.sh  --status_qdevice
corosync-qdevice (pid 11499) is running...
[root@node201 KingbaseHA]# ./cluster_manager.sh   --status_qdisk_fenced
qdisk-fenced is stopped

# As shown below, the dlm and gfs2 resources are not yet loaded:
[root@node202 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Aug  9 18:05:20 2024
  * Last change:  Fri Aug  9 18:01:06 2024 by hacluster via crmd on node201
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node201 node202 ]

Full List of Resources:  # no resources loaded
  * No resources

3. Configure and start the dlm and gfs2 resources

[root@node201 KingbaseHA]# ./cluster_manager.sh --config_gfs2_resource
config dlm and gfs2 resource start
3e934629-a2b8-4b7d-a153-ded2dbec7a28
config dlm and gfs2 resource success
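The UUID printed by the script is the gfs2 filesystem's UUID; it reappears later as the device="-U …" parameter of the Filesystem resource. As a sketch (the sample blkid line below is assumed), it can be cross-checked against blkid:

```shell
# Cross-check the gfs2 UUID against blkid output (sketch; the sample line
# is assumed, matching the UUID printed by the script).
# On a live node:  blkid -t TYPE=gfs2
blkid_line='/dev/sdb1: UUID="3e934629-a2b8-4b7d-a153-ded2dbec7a28" TYPE="gfs2"'
uuid=$(printf '%s\n' "$blkid_line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```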

As shown below, the dlm and gfs2 resources are now started:
[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 15:31:41 2024
  * Last change:  Mon Aug 12 15:31:31 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 4 resource instances configured

Node List:
  * Online: [ node201 node202 ]

Full List of Resources:        # dlm and gfs2 resources loaded and started
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]

4. Start the database resource
1) Configure and start the DB resource

[root@node201 KingbaseHA]# ./cluster_manager.sh --config_rac_resource
crm configure DB resource start
crm configure DB resource end

2) Check the cluster resource configuration
As shown below, the DB resource has been added to the configuration:

[root@node201 ~]# crm config show
node 1: node201
node 2: node202
primitive DB ocf:kingbase:kingbase \
        params sys_ctl="/opt/Kingbase/ES/V8/Server/bin/sys_ctl" ksql="/opt/Kingbase/ES/V8/Server/bin/ksql" sys_isready="/opt/Kingbase/ES/V8/Server/bin/sys_isready" kb_data="/sharedata/data_gfs2/kingbase/data" kb_dba=kingbase kb_host=0.0.0.0 kb_user=system kb_port=55321 kb_db=template1 logfile="/home/kingbase/log/kingbase1.log" \
        op start interval=0 timeout=120 \
        op stop interval=0 timeout=120 \
        op monitor interval=9s timeout=30 on-fail=stop \
        meta failure-timeout=5min
primitive dlm ocf:pacemaker:controld \
        params daemon="/opt/KingbaseHA/dlm-dlm/sbin/dlm_controld" dlm_tool="/opt/KingbaseHA/dlm-dlm/sbin/dlm_tool" args="-s 0 -f 0" allow_stonith_disabled=true \
        op start interval=0 \
        op stop interval=0 \
        op monitor interval=60 timeout=60
primitive gfs2 Filesystem \
        params device="-U 3e934629-a2b8-4b7d-a153-ded2dbec7a28" directory="/sharedata/data_gfs2" fstype=gfs2 \
        op start interval=0 timeout=60 \
        op stop interval=0 timeout=60 \
        op monitor interval=30s timeout=60 OCF_CHECK_LEVEL=20 \
        meta failure-timeout=5min
clone clone-DB DB \
        meta target-role=Started
clone clone-dlm dlm \
        meta interleave=true target-role=Started
clone clone-gfs2 gfs2 \
        meta interleave=true target-role=Started
colocation cluster-colo1 inf: clone-gfs2 clone-dlm
order cluster-order1 clone-dlm clone-gfs2
order cluster-order2 clone-dlm clone-gfs2
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.3-4b1f869f0f \
        cluster-infrastructure=corosync \
        cluster-name=krac \
        no-quorum-policy=freeze \
        stonith-enabled=false

3) Check database service status
As shown below, the cluster resource status now shows the DB resource started:

[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 15:32:50 2024
  * Last change:  Mon Aug 12 15:32:43 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 6 resource instances configured

Node List:
  * Online: [ node201 node202 ]

Full List of Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:             # DB resource loaded and started
    * Started: [ node201 node202 ]

4) Database service status

[root@node201 KingbaseHA]# netstat -antlp |grep 553
tcp        0      0 0.0.0.0:55321           0.0.0.0:*               LISTEN      29041/kingbase
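The listening check can also be scripted (a sketch; the sample line is taken from the output above):

```shell
# Extract the listening port of the kingbase server from a netstat line.
# On a live node:  netstat -antlp | grep kingbase
line='tcp        0      0 0.0.0.0:55321           0.0.0.0:*               LISTEN      29041/kingbase'
port=$(printf '%s\n' "$line" | awk '{split($4, a, ":"); print a[2]}')
echo "$port"   # 55321
```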

5) Real-time cluster status monitoring

[root@node201 ~]# crm_mon -1
Cluster Summary:
  * Stack: corosync
  * Current DC: node202 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 11:20:47 2024
  * Last change:  Mon Aug 12 10:55:34 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 6 resource instances configured

Node List:
  * Online: [ node201 node202 ]

Active Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:
    * Started: [ node201 node202 ]

5. Stop the cluster

[root@node201 KingbaseHA]# ./cluster_manager.sh stop
Signaling Pacemaker Cluster Manager to terminate[  OK  ]
Waiting for cluster services to unload.......[  OK  ]
Signaling Qdisk Fenced daemon (qdisk-fenced) to terminate: [  OK  ]
Waiting for qdisk-fenced services to unload:..[  OK  ]
Signaling Corosync Qdevice daemon (corosync-qdevice) to terminate: [  OK  ]
Waiting for corosync-qdevice services to unload:.[  OK  ]
Signaling Corosync Cluster Engine (corosync) to terminate: [  OK  ]
Waiting for corosync services to unload:..[  OK  ]

# Check resource status from the other node:
[root@node202 KingbaseHA]# crm resource status
 fence_qdisk_0  (stonith:fence_qdisk):  Started
 fence_qdisk_1  (stonith:fence_qdisk):  Started
 Clone Set: clone-dlm [dlm]
     Started: [ node201 node202 ]
 Clone Set: clone-gfs2 [gfs2]
     Started: [ node201 node202 ]
 Clone Set: clone-DB [DB]
     Stopped (disabled): [ node201 node202 ]

III. Automatic Resource Recovery
KingbaseES RAC manages the database as a cluster resource: if the database service is stopped with sys_ctl stop or its processes are killed, Pacemaker automatically restarts the resource.
1. Stop the database service

[kingbase@node201 bin]$ ./sys_ctl stop -D /sharedata/data_gfs2/kingbase/data/
waiting for server to shut down................... done
server stopped

2. Check resource status
As shown below, Pacemaker has detected that the DB resource is in an abnormal state:

[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node202 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 11:56:05 2024
  * Last change:  Mon Aug 12 11:53:25 2024 by root via cibadmin on node202
  * 2 nodes configured
  * 6 resource instances configured

Node List:
  * Online: [ node201 node202 ]
Full List of Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:
    * DB        (ocf::kingbase:kingbase):        Stopping node202
    * DB        (ocf::kingbase:kingbase):        FAILED node201

Failed Resource Actions:
  * DB_monitor_9000 on node201 'not running' (7): call=35, status='complete', exitreason='', last-rc-change='2024-08-12 11:56:04 +08:00', queued=0ms, exec=0ms
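A failure like the one above can also be spotted programmatically. The sketch below (sample lines taken from the status output above) filters a crm status dump for FAILED or Stopping resource instances:

```shell
# Count abnormal resource instances in `crm status` output (sketch).
# On a live node:  crm status | grep -cE 'FAILED|Stopping'
crm_output='  * Clone Set: clone-DB [DB]:
    * DB        (ocf::kingbase:kingbase):        Stopping node202
    * DB        (ocf::kingbase:kingbase):        FAILED node201'
failed=$(printf '%s\n' "$crm_output" | grep -cE 'FAILED|Stopping')
echo "$failed"   # number of abnormal DB instances: 2
```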

3. Database resource recovery
As shown below, after a short while the database resource has been restarted by Pacemaker:

[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node202 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 13:56:02 2024
  * Last change:  Mon Aug 12 11:53:25 2024 by root via cibadmin on node202
  * 2 nodes configured
  * 6 resource instances configured

Node List:
  * Online: [ node201 node202 ]

Full List of Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:
    * Started: [ node201 node202 ]

# The database service is running normally
[root@node201 KingbaseHA]# netstat -antlp |grep 553
tcp        0      0 0.0.0.0:55321           0.0.0.0:*               LISTEN      20963/kingbase

IV. Accessing the Database

[kingbase@node201 bin]$ ./ksql -U system test -p 55321
Type "help" for help.

prod=# select * from t1 limit 10;
 id | name
----+-------
  1 | usr1
  2 | usr2
  3 | usr3
  4 | usr4
  5 | usr5
  6 | usr6
  7 | usr7
  8 | usr8
  9 | usr9
 10 | usr10
(10 rows)


[kingbase@node202 bin]$ ./ksql -U system test -p 55321
Type "help" for help.

test=# \c prod
prod=# select count(*) from t1;
 count
-------
  1000
(1 row)

V. Appendix: Fault Handling

Fault 1: cluster service fails to start
As shown below, cluster startup stalls after cleaning the qdisk fence flag:

[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[  OK  ]
Starting Corosync Cluster Engine (corosync): [WARNING]
clean qdisk fence flag start

Check the cluster configuration:

[root@node201 ~]# cat /opt/KingbaseHA/cluster_manager.conf|grep fence
################# fence #################
enable_fence=1
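One way (a sketch, not from the original article) to flip the flag on each node is with sed; demonstrated here on a temporary copy, the real target would be /opt/KingbaseHA/cluster_manager.conf:

```shell
# Sketch: switch enable_fence from 1 to 0 in cluster_manager.conf.
# Done on a temporary copy here; on a real node, run against
# /opt/KingbaseHA/cluster_manager.conf on all nodes.
conf=$(mktemp)
printf '################# fence #################\nenable_fence=1\n' > "$conf"
sed -i 's/^enable_fence=1$/enable_fence=0/' "$conf"
grep '^enable_fence=' "$conf"   # prints: enable_fence=0
rm -f "$conf"
```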

After setting enable_fence=0, start the cluster:

[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[  OK  ]
Starting Corosync Cluster Engine (corosync): [WARNING]
Starting Corosync Qdevice daemon (corosync-qdevice): [  OK  ]
Waiting for quorate:...........[  OK  ]
Starting Pacemaker Cluster Manager[  OK  ]

Fault 2: crm resource start clone-DB fails
1) Start the cluster service

[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[  OK  ]
Starting Corosync Cluster Engine (corosync): [WARNING]
Starting Corosync Qdevice daemon (corosync-qdevice): [  OK  ]
Waiting for quorate:...........[  OK  ]
Starting Pacemaker Cluster Manager[  OK  ]

2) Check cluster resource status
As shown below, the dlm and gfs2 resources are not loaded:

[root@node202 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Aug  9 18:05:20 2024
  * Last change:  Fri Aug  9 18:01:06 2024 by hacluster via crmd on node201
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node201 node202 ]

Full List of Resources:  # no resources loaded
  * No resources

3) Configure and start the dlm and gfs2 resources

[root@node201 KingbaseHA]# ./cluster_manager.sh --config_gfs2_resource
config dlm and gfs2 resource start
3e934629-a2b8-4b7d-a153-ded2dbec7a28
config dlm and gfs2 resource success

As shown below, the dlm and gfs2 resources are started, but the DB database resource is still missing:
[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 15:31:41 2024
  * Last change:  Mon Aug 12 15:31:31 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 4 resource instances configured

Node List:
  * Online: [ node201 node202 ]

Full List of Resources:        # dlm and gfs2 resources loaded and started
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]

4) Configure the DB resource

[root@node201 KingbaseHA]# crm configure primitive DB ocf:kingbase:kingbase \
>      params sys_ctl="/opt/Kingbase/ES/V8/Server/bin/sys_ctl" \
>      ksql="/opt/Kingbase/ES/V8/Server/bin/ksql" \
>      sys_isready="/opt/Kingbase/ES/V8/Server/bin/sys_isready" \
>      kb_data="/sharedata/data_gfs2/kingbase/data" \
>      kb_dba="kingbase" kb_host="0.0.0.0" \
>      kb_user="system" \
>      kb_port="55321" \
>      kb_db="template1" \
>      logfile="/home/kingbase/log/kingbase1.log" \
>      op start interval="0" timeout="120" \
>      op stop interval="0" timeout="120" \
>      op monitor interval="9s" timeout="30" on-fail=stop \
>      meta failure-timeout=5min target-role=Stopped

# Configure DB as a clone resource and set the resource start order:
[root@node201 KingbaseHA]# crm configure clone clone-DB DB
[root@node201 KingbaseHA]# crm configure order cluster-order2 clone-dlm clone-gfs2 clone-DB

5) Check the cluster resources
As shown below, the DB resource has been added to the cluster configuration:

[root@node201 KingbaseHA]# crm config show
node 1: node201
node 2: node202
primitive DB ocf:kingbase:kingbase \
        params sys_ctl="/opt/Kingbase/ES/V8/Server/bin/sys_ctl" ksql="/opt/Kingbase/ES/V8/Server/bin/ksql" sys_isready="/opt/Kingbase/ES/V8/Server/bin/sys_isready" kb_data="/sharedata/data_gfs2/kingbase/data" kb_dba=kingbase kb_host=0.0.0.0 kb_user=system kb_port=55321 kb_db=template1 logfile="/home/kingbase/log/kingbase1.log" \
        op start interval=0 timeout=120 \
        op stop interval=0 timeout=120 \
        op monitor interval=9s timeout=30 on-fail=stop \
        meta failure-timeout=5min
primitive dlm ocf:pacemaker:controld \
        params daemon="/opt/KingbaseHA/dlm-dlm/sbin/dlm_controld" dlm_tool="/opt/KingbaseHA/dlm-dlm/sbin/dlm_tool" args="-s 0 -f 0" allow_stonith_disabled=true \
        op start interval=0 \
        op stop interval=0 \
        op monitor interval=60 timeout=60
primitive gfs2 Filesystem \
        params device="-U 3e934629-a2b8-4b7d-a153-ded2dbec7a28" directory="/sharedata/data_gfs2" fstype=gfs2 \
        op start interval=0 timeout=60 \
        op stop interval=0 timeout=60 \
        op monitor interval=30s timeout=60 OCF_CHECK_LEVEL=20 \
        meta failure-timeout=5min
clone clone-DB DB \
        meta target-role=Started
clone clone-dlm dlm \
        meta interleave=true target-role=Started
clone clone-gfs2 gfs2 \
        meta interleave=true target-role=Started
colocation cluster-colo1 inf: clone-gfs2 clone-dlm
order cluster-order1 clone-dlm clone-gfs2
order cluster-order2 clone-dlm clone-gfs2
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.3-4b1f869f0f \
        cluster-infrastructure=corosync \
        cluster-name=krac \
        no-quorum-policy=freeze \
        stonith-enabled=false
        
[root@node201 KingbaseHA]# crm config verify
[root@node201 KingbaseHA]# crm config commit  

6) Start the database resource

[root@node201 KingbaseHA]# crm resource start clone-DB
[root@node201 KingbaseHA]# crm resource status clone-DB
resource clone-DB is running on: node201
resource clone-DB is running on: node202
# The database service is started
[root@node201 KingbaseHA]# netstat -antlp |grep 553
tcp        0      0 0.0.0.0:55321           0.0.0.0:*               LISTEN      3240/kingbase


Check the database service status:
[root@node202 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 14:57:06 2024
  * Last change:  Mon Aug 12 14:56:00 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 6 resource instances configured

Node List:
  * Online: [ node201 node202 ]
Full List of Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:   # DB resource loaded and started
    * Started: [ node201 node202 ]

VI. Cleaning Up and Uninstalling the Cluster
1. Clean up the cluster configuration (all nodes)

[root@node201 KingbaseHA]# ./cluster_manager.sh --clean_all
clean all start
Pacemaker Cluster Manager is already stopped[  OK  ]
clean env variable start
clean env variable success
clean host start
clean host success
remove pacemaker daemon user start
remove pacemaker daemon user success
clean all success

# As shown below, the cluster configuration has been cleaned up:
[root@node201 KingbaseHA]# crm config show
ERROR: running cibadmin -Ql: Connection to the CIB manager failed: Transport endpoint is not connected
Init failed, could not perform requested operations
ERROR: configure: Missing requirements

[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[  OK  ]
./cluster_manager.sh: line 1143: /etc/init.d/corosync: No such file or directory

2. Uninstall the cluster (all nodes)
As shown below, uninstalling the cluster removes the /opt/KingbaseHA directory:

[root@node202 KingbaseHA]# ./cluster_manager.sh --uninstall
uninstall start
./cluster_manager.sh: line 1276: /etc/init.d/pacemaker: No such file or directory
./cluster_manager.sh: line 1335: /etc/init.d/corosync-qdevice: No such file or directory
./cluster_manager.sh: line 1148: /etc/init.d/corosync: No such file or directory
clean env variable start
clean env variable success
clean host start
clean host success
remove pacemaker daemon user start
userdel: user 'hacluster' does not exist
groupdel: group 'haclient' does not exist
remove pacemaker daemon user success
uninstall success

# The /opt/KingbaseHA directory has been removed
[root@node202 KingbaseHA]#  ls -lh /opt/KingbaseHA/
ls: cannot access /opt/KingbaseHA/: No such file or directory

From: https://www.cnblogs.com/tiany1224/p/18356480
