KingbaseES RAC Deployment Case: Single-Node Deployment

Case description:
Deploying KingbaseES RAC on a single node.
Applicable version:
KingbaseES V008R006C008M030B0010

Operating system:

[root@node203 KingbaseHA]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

Cluster node information:

[root@node203 ~]# cat /etc/hosts
.......
192.168.1.203 node203

I. System Environment Preparation
For system preparation, see this earlier post: https://www.cnblogs.com/tiany1224/p/18342848

The disk environment is as follows:

[root@node203 ~]# fdisk -l
Disk /dev/sdh: 134 MB, 134217728 bytes, 262144 sectors
......
Disk /dev/sdi: 4294 MB, 4294967296 bytes, 8388608 sectors
......
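Before initializing anything, it can be worth double-checking the device names and sizes of the two disks the cluster will use; a minimal check, assuming the same /dev/sdh and /dev/sdi devices as above:

# list only the two disks intended for the cluster (quorum disk and shared data disk)
lsblk /dev/sdh /dev/sdi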

II. Deploying and Configuring RAC

1. Install the database software

[root@node203 soft]# mount -o loop KingbaseES_V008R006C008M030B0010_Lin64_install.iso /mnt
mount: /dev/loop0 is write-protected, mounting read-only

[kingbase@node203 mnt]$ sh setup.sh
Now launch installer...
Choose the server type
----------------------
Please choose the server type :
  ->1- default
    2- rac

  Default Install Folder: /opt/Kingbase/ES/V8

2. Create the cluster deployment directory
As shown below, go to the script directory under the database installation and run the clusterware deployment script, which creates the "/opt/KingbaseHA" directory by default:

[root@node203 script]# pwd
/opt/Kingbase/ES/V8/install/script
[root@node203 script]# ls -lh
total 32K
-rwxr-xr-x 1 kingbase kingbase  321 Jul 18 14:17 consoleCloud-uninstall.sh
-rwxr-x--- 1 kingbase kingbase 3.6K Jul 18 14:17 initcluster.sh
-rwxr-x--- 1 kingbase kingbase  289 Jul 18 14:17 javatools.sh
-rwxr-xr-x 1 kingbase kingbase  553 Jul 18 14:17 rootDeployClusterware.sh
-rwxr-x--- 1 kingbase kingbase  767 Jul 18 14:17 root.sh
-rwxr-x--- 1 kingbase kingbase  627 Jul 18 14:17 rootuninstall.sh
-rwxr-x--- 1 kingbase kingbase 3.7K Jul 18 14:17 startupcfg.sh
-rwxr-x--- 1 kingbase kingbase  252 Jul 18 14:17 stopserver.sh

# Run the script
[root@node203 script]# sh rootDeployClusterware.sh
cp: cannot stat ‘@@INSTALL_DIR@@/KingbaseHA/*’: No such file or directory

# Fix the script variable (the @@INSTALL_DIR@@ placeholder was never substituted at install time)
[root@node203 V8]# head  install/script/rootDeployClusterware.sh
#!/bin/sh
# copy KingbaseHA to /opt/KingbaseHA
ROOT_UID=0
#INSTALLDIR=@@INSTALL_DIR@@
INSTALLDIR=/opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010
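Instead of editing the file by hand, the placeholder can be substituted in place; a one-liner sketch, assuming the original line still reads INSTALLDIR=@@INSTALL_DIR@@ and the same install path as above:

# substitute the unresolved placeholder with the real installation directory
sed -i 's|@@INSTALL_DIR@@|/opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010|' install/script/rootDeployClusterware.sh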

# Run the script again (creates /opt/KingbaseHA)
[root@node203 V8]# sh install/script/rootDeployClusterware.sh
/opt/KingbaseHA has existed. Do you want to override it?(y/n)y
y
[root@node203 V8]# ls -lh /opt/KingbaseHA/
total 64K
-rw-r--r--  1 root root 3.8K Jul 30 17:38 cluster_manager.conf
-rwxr-xr-x  1 root root  54K Jul 30 17:38 cluster_manager.sh
drwxr-xr-x  9 root root  121 Jul 30 17:38 corosync
drwxr-xr-x  7 root root  122 Jul 30 17:38 corosync-qdevice
drwxr-xr-x  8 root root   68 Jul 30 17:38 crmsh
drwxr-xr-x  7 root root   65 Jul 30 17:38 dlm-dlm
drwxr-xr-x  5 root root   39 Jul 30 17:38 fence_agents
drwxr-xr-x  5 root root   60 Jul 30 17:38 gfs2
drwxr-xr-x  6 root root   53 Jul 30 17:38 gfs2-utils
drwxr-xr-x  5 root root   39 Jul 30 17:38 ipmi_tool
drwxr-xr-x  7 root root   84 Jul 30 17:38 kingbasefs
drwxr-xr-x  5 root root   42 Jul 30 17:38 kronosnet
drwxr-xr-x  2 root root 4.0K Jul 30 17:38 lib
drwxr-xr-x  2 root root   28 Jul 30 17:38 lib64
drwxr-xr-x  7 root root   63 Jul 30 17:38 libqb
drwxr-xr-x 10 root root  136 Jul 30 17:38 pacemaker
drwxr-xr-x  6 root root   52 Jul 30 17:38 python2.7

3. Configure the cluster deployment

[root@node203 KingbaseHA]# cat cluster_manager.conf
######################################## Basic Configuration ####################################
################# install #################
##cluster node information
cluster_name=krac
node_name=(node203)
node_ip=(192.168.1.203)

##voting disk, used for qdevice
enable_qdisk=1
votingdisk=/dev/sdh                 # quorum voting disk

##shared data disk, used for gfs2
sharedata_dir=/sharedata/data_gfs2
sharedata_disk=/dev/sdi             # shared data storage for the cluster

################# common ################
##cluster manager install dir
install_dir=/opt/KingbaseHA
env_bash_file=/root/.bashrc

##pacemaker
pacemaker_daemon_group=haclient
pacemaker_daemon_user=hacluster

##kingbase owner and install_dir
kingbaseowner=kingbase
kingbasegroup=kingbase
kingbase_install_dir=/opt/Kingbase/ES/V8/Server

################# crm_dsn #################
##crm_dsn, used for configuring data source connection string information.
database="test"
username="system"
# If logging in to the database requires no password,
# the password item can be omitted.
password="123456"
# Do not add '-D' parameter to 'initdb_options'.
initdb_options="-A trust -U $username"
......
######################################## For KingbaseES RAC ########################################
##if installing KingbaseES RAC, set 'install_rac' to 1, else set it to 0
install_rac=1

##KingbaseES RAC params
rac_port=55321
rac_lms_port=53444
rac_lms_count=7
###################
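Before running the init steps below, a quick sanity check on the key settings can catch typos early; a minimal sketch against the file shown above:

# confirm the values the init steps will consume
grep -E '^(cluster_name|node_name|node_ip|enable_qdisk|votingdisk|sharedata_disk|install_rac|rac_port)' /opt/KingbaseHA/cluster_manager.conf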

4. Initialize the voting disk

[root@node203 KingbaseHA]# ./cluster_manager.sh --qdisk_init
qdisk init start
Writing new quorum disk label 'krac' to /dev/sdh.
WARNING: About to destroy all data on /dev/sdh; proceed? (Y/N):
y
/dev/block/8:112:
/dev/disk/by-id/ata-VBOX_HARDDISK_VB049744dd-80024550:
/dev/disk/by-path/pci-0000:00:0d.0-ata-8.0:
/dev/sdh:
        Magic:                eb7a62c2
        Label:                krac
        Created:              Tue Aug 20 10:44:53 2024
        Host:                 node203
        Kernel Sector Size:   512
        Recorded Sector Size: 512

qdisk init success

5. Initialize the data disk

[root@node203 KingbaseHA]# ./cluster_manager.sh --cluster_disk_init
rac disk init start
This will destroy any data on /dev/sdi
Are you sure you want to proceed? (Y/N): y
Adding journals: Done
Building resource groups: Done
Creating quota file: Done
Writing superblock and syncing: Done
Device:                    /dev/sdi
Block size:                4096
Device size:               4.00 GB (1048576 blocks)
Filesystem size:           4.00 GB (1048575 blocks)
Journals:                  2
Journal size:              32MB
Resource groups:           18
Locking protocol:          "lock_dlm"
Lock table:                "krac:gfs2"
UUID:                      dc81ac11-8871-44b1-829a-e2df8573c5d5
rac disk init success
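The UUID printed above can be cross-checked from the filesystem signature on the device itself; a quick check, assuming blkid on this CentOS 7 system recognizes gfs2 signatures:

# should report TYPE="gfs2" and the same UUID as the init output
blkid /dev/sdi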

6. Initialize the base components (all nodes)
Run the following command on the node to initialize all the base components, such as corosync, pacemaker, and corosync-qdevice.

[root@node203 KingbaseHA]# ./cluster_manager.sh --base_configure_init
init kernel soft watchdog start
init kernel soft watchdog success
config host start
config host success
add env varaible in /root/.bashrc
add env variable success
config corosync.conf start
config corosync.conf success
Starting Corosync Cluster Engine (corosync): [  OK  ]
add pacemaker daemon user start
groupadd: group 'haclient' already exists
useradd: user 'hacluster' already exists
add pacemaker daemon user success
config pacemaker success
Starting Pacemaker Cluster Manager[  OK  ]
config qdevice start
config qdevice success
Starting Qdisk Fenced daemon (qdisk-fenced): [  OK  ]
Starting Corosync Qdevice daemon (corosync-qdevice): [  OK  ]
Please note the configuration: superuser(system) and port(36321) for database(test) of resource(DB0)
Please note the configuration: superuser(system) and port(36321) for database(test) of resource(DB1)
config kingbase rac start
/opt/Kingbase/ES/V8/Server/log already exist
config kingbase rac success
add_udev_rule start
add_udev_rule success
insmod dlm.ko success
check and mknod for dlm start
check and mknod for dlm success

Apply the environment variables:
[root@node203 KingbaseHA]# source /root/.bashrc
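A quick way to confirm the new environment took effect; a sketch, assuming base_configure_init added the /opt/KingbaseHA tool directories to PATH:

# the cluster tooling should now resolve from /opt/KingbaseHA
which crm
# print corosync ring status to confirm the cluster engine is up
corosync-cfgtool -s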

Check the cluster resource status:

[root@node203 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node203 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Tue Aug 20 10:50:49 2024
  * Last change:  Tue Aug 20 10:48:41 2024 by hacluster via crmd on node203
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ node203 ]

Full List of Resources:
  * No resources

7. Initialize gfs2-related resources (all nodes)
As shown below, update the system gfs2 kernel module (on this kernel the OS-native gfs2 module is kept, as the output indicates):

[root@node203 KingbaseHA]# ./cluster_manager.sh --init_gfs2
init gfs2 start
current OS kernel version does not support updating gfs2, please confirm whether to continue? (Y/N):
y
init the OS native gfs2 success
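To verify which modules are actually loaded after this step, a minimal check against the stock kernel module names:

# confirm the gfs2 and dlm kernel modules are present
lsmod | grep -E 'gfs2|dlm'
modinfo gfs2 | head -n 5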

8. Configure the cluster resources (fence, dlm, and gfs2)

[root@node203 KingbaseHA]# ./cluster_manager.sh --config_gfs2_resource
config dlm and gfs2 resource start
dc81ac11-8871-44b1-829a-e2df8573c5d5

config dlm and gfs2 resource success

Check the cluster resource status:
As shown below, dlm and gfs2 resources have been added to the cluster:

[root@node203 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node203 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Tue Aug 20 10:51:53 2024
  * Last change:  Tue Aug 20 10:51:52 2024 by root via cibadmin on node203
  * 1 node configured
  * 3 resource instances configured

Node List:
  * Online: [ node203 ]

Full List of Resources:
  * fence_qdisk_0       (stonith:fence_qdisk):   Started node203
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node203 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * gfs2      (ocf::heartbeat:Filesystem):     Starting node203
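
Once the gfs2 resource reports Started, the mount can also be verified from the OS side; a quick sketch, using the sharedata_dir configured earlier:

# the gfs2 filesystem should appear mounted at the configured directory
mount -t gfs2
df -h /sharedata/data_gfs2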

View the cluster resource configuration:

[root@node203 KingbaseHA]# crm config show
node 1: node203
primitive dlm ocf:pacemaker:controld \
        params daemon="/opt/KingbaseHA/dlm-dlm/sbin/dlm_controld" dlm_tool="/opt/KingbaseHA/dlm-dlm/sbin/dlm_tool" args="-s 0 -f 0" \
        op start interval=0 \
        op stop interval=0 \
        op monitor interval=60 timeout=60
primitive fence_qdisk_0 stonith:fence_qdisk \
        params qdisk_path="/dev/sdh" qdisk_fence_tool="/opt/KingbaseHA/corosync-qdevice/sbin/qdisk-fence-tool" pcmk_host_list=node203 \
        op monitor interval=60s \
        meta failure-timeout=5min
primitive gfs2 Filesystem \
        params device="-U dc81ac11-8871-44b1-829a-e2df8573c5d5" directory="/sharedata/data_gfs2" fstype=gfs2 \
        op start interval=0 timeout=60 \
        op stop interval=0 timeout=60 \
        op monitor interval=30s timeout=60 OCF_CHECK_LEVEL=20 \
        meta failure-timeout=5min
clone clone-dlm dlm \
        meta interleave=true target-role=Started
clone clone-gfs2 gfs2 \
        meta interleave=true target-role=Started
colocation cluster-colo1 inf: clone-gfs2 clone-dlm
order cluster-order1 clone-dlm clone-gfs2
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.3-4b1f869f0f \
        cluster-infrastructure=corosync \
        cluster-name=krac

9. Create the RAC database instance

[root@node203 KingbaseHA]# ./cluster_manager.sh --init_rac
init KingbaseES RAC start
create_rac_share_dir start
create_rac_share_dir success
.......
Success. You can now start the database server with:
    ./sys_ctl -D /sharedata/data_gfs2/kingbase/data -l logfile start
init KingbaseES RAC success
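A quick look at the initialized data directory confirms the instance landed on the shared gfs2 mount; a sketch, with the path taken from the output above:

# the instance files should live on the shared filesystem
ls -l /sharedata/data_gfs2/kingbase/data | head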

10. Configure the database (DB) resource

[root@node203 KingbaseHA]# ./cluster_manager.sh --config_rac_resource
crm configure DB resource start
crm configure DB resource end

View the cluster resource configuration:
As shown below, the DB database resource has been added to the cluster configuration:

[root@node203 KingbaseHA]# crm config show
node 1: node203
primitive DB ocf:kingbase:kingbase \
        params sys_ctl="/opt/Kingbase/ES/V8/Server/bin/sys_ctl" ksql="/opt/Kingbase/ES/V8/Server/bin/ksql" sys_isready="/opt/Kingbase/ES/V8/Server/bin/sys_isready" kb_data="/sharedata/data_gfs2/kingbase/data" kb_dba=kingbase kb_host=0.0.0.0 kb_user=system kb_port=55321 kb_db=template1 logfile="/opt/Kingbase/ES/V8/Server/log/kingbase1.log" \
        op start interval=0 timeout=120 \
        op stop interval=0 timeout=120 \
        op monitor interval=9s timeout=30 on-fail=stop \
        meta failure-timeout=5min
primitive dlm ocf:pacemaker:controld \
        params daemon="/opt/KingbaseHA/dlm-dlm/sbin/dlm_controld" dlm_tool="/opt/KingbaseHA/dlm-dlm/sbin/dlm_tool" args="-s 0 -f 0" \
        op start interval=0 \
        op stop interval=0 \
        op monitor interval=60 timeout=60
primitive fence_qdisk_0 stonith:fence_qdisk \
        params qdisk_path="/dev/sdh" qdisk_fence_tool="/opt/KingbaseHA/corosync-qdevice/sbin/qdisk-fence-tool" pcmk_host_list=node203 \
        op monitor interval=60s \
        meta failure-timeout=5min
primitive gfs2 Filesystem \
        params device="-U dc81ac11-8871-44b1-829a-e2df8573c5d5" directory="/sharedata/data_gfs2" fstype=gfs2 \
        op start interval=0 timeout=60 \
        op stop interval=0 timeout=60 \
        op monitor interval=30s timeout=60 OCF_CHECK_LEVEL=20 \
        meta failure-timeout=5min
clone clone-DB DB \
        meta interleave=true target-role=Started
clone clone-dlm dlm \
        meta interleave=true target-role=Started
clone clone-gfs2 gfs2 \
        meta interleave=true target-role=Started
colocation cluster-colo1 inf: clone-gfs2 clone-dlm
order cluster-order1 clone-dlm clone-gfs2
order cluster-order2 clone-dlm clone-gfs2 clone-DB
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.3-4b1f869f0f \
        cluster-infrastructure=corosync \
        cluster-name=krac \
        load-threshold="0%"

III. Connecting to and Accessing RAC

1. Start the database service via the cluster

[root@node203 KingbaseHA]# crm resource stop clone-DB
[root@node203 KingbaseHA]# crm resource start clone-DB
[root@node203 KingbaseHA]# crm resource status clone-DB
resource clone-DB is running on: node203
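
Beyond crm's view, the instance itself can be probed with sys_isready, the binary referenced in the DB resource definition above; a hedged sketch, assuming it accepts pg_isready-style -h/-p flags:

# probe the instance on the rac_port configured earlier
/opt/Kingbase/ES/V8/Server/bin/sys_isready -h 127.0.0.1 -p 55321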

2. Start the database manually

# As shown below, startup fails because the license.dat file is not found
[kingbase@node203 bin]$ ./sys_ctl start -D /sharedata/data_gfs2/kingbase/data
waiting for server to start....FATAL:  XX000: license.dat path is dir or file does not exist.
LOCATION:  KesMasterMain, master.c:1002
 stopped waiting
sys_ctl: could not start server
Examine the log output.

[kingbase@node203 data]$ ls -lh /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/license.dat
-rw-r--r-- 1 kingbase kingbase 2.9K Aug 19 18:02 /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/license.dat

# Copy the license file into the bin directory, then start the database service
[kingbase@node203 bin]$ cp /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/license.dat ./
[kingbase@node203 bin]$ ./sys_ctl start -D /sharedata/data_gfs2/kingbase/data
waiting for server to start....2024-08-20 11:29:19.976 CST [3354] LOG:  please configure a valid archive command for WAL file archiving as soon as possible
2024-08-20 11:29:19.997 CST [3354] LOG:  sepapower extension initialization complete
......
server started

[kingbase@node203 bin]$ netstat -antlp |grep 55321
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:55321           0.0.0.0:*               LISTEN      3354/kingbase
tcp6       0      0 :::55321                :::*                    LISTEN      3354/kingbase
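
Instead of copying license.dat into bin, a symlink avoids a stale duplicate if the license file is ever renewed; a minimal alternative sketch, assuming the server resolves license.dat from its bin directory as the copy above suggests:

# link rather than copy, so license updates propagate automatically
ln -sf /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/license.dat /opt/Kingbase/ES/V8/Server/bin/license.dat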

Check the cluster resources:
As shown below, the DB database resource is in the Started state:

[root@node203 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node203 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Tue Aug 20 11:34:56 2024
  * Last change:  Tue Aug 20 10:58:11 2024 by root via cibadmin on node203
  * 1 node configured
  * 4 resource instances configured

Node List:
  * Online: [ node203 ]

Full List of Resources:
  * fence_qdisk_0       (stonith:fence_qdisk):   Started node203
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node203 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node203 ]
  * Clone Set: clone-DB [DB]:
    * Started: [ node203 ]

3. Connect to the database

[kingbase@node203 bin]$ ./ksql -U system test -p 55321
Type "help" for help.

test=# \l
                              List of databases
   Name    | Owner  | Encoding |  Collate   |   Ctype    | Access privileges
-----------+--------+----------+------------+------------+-------------------
 kingbase  | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 |
 security  | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 |
 template0 | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 | =c/system        +
           |        |          |            |            | system=CTc/system
 template1 | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 | =c/system        +
           |        |          |            |            | system=CTc/system
 test      | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 |
(5 rows)
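
For scripted smoke tests, ksql can also run a single statement non-interactively; a sketch assuming it mirrors psql's -d/-c flags (the -U and -p flags are confirmed above):

# one-shot connectivity check against the RAC instance
./ksql -U system -p 55321 -d test -c 'select version();'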

--- As shown above, the single-node deployment of KingbaseES RAC is complete.
