
LightDB Distributed High Availability + Load Balancing Deployment


Software version

LightDB 13.8-22.3

Installing the distributed multi-machine single-instance mode

Following section 6.3 of the LightDB installation guide, install the distributed multi-machine single-instance mode.
After installation, confirm that the $LTDATA and $LTHOME environment variables are configured correctly and that the worker nodes have been added.
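
A quick sanity check (a minimal sketch; assumes lt_ctl mirrors pg_ctl's status subcommand):

echo "LTDATA=$LTDATA"
echo "LTHOME=$LTHOME"
lt_ctl status   # the instance should be reported as running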

This article assumes the CN (coordinator node, primary) is installed on machine 186 and the two DNs (data nodes) on machines 192 and 193, all listening on port 15858.
The following sections describe how to set up CN high availability (with a CN standby on machine 187), support failover, allow the CN standby to accept DML, and configure LVS for load balancing.

Query the data nodes on the CN:
ltsql -p 15858 -h 10.18.68.186

canopy@lt_test=# select nodeid,nodename,nodeport,isactive from pg_dist_node;
 nodeid |   nodename   | nodeport | isactive 
--------+--------------+----------+----------
      2 | 10.18.68.192 |    15858 | t        
      3 | 10.18.68.193 |    15858 | t        
(2 rows)

Setting up CN high availability with failover

Steps on the CN primary machine

In this example, perform the following steps on machine 186:

  1. Run lt_ctl stop to stop the CN instance, then edit $LTDATA/lightdb.conf and append ltcluster to shared_preload_libraries (a sketch of this step follows below), e.g.:
shared_preload_libraries='canopy,ltcluster,lt_stat_statements,lt_stat_activity,lt_prewarm,lt_cron,ltaudit,lt_hint_plan'
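For example, a minimal sketch of this step (assumes the stock parameter line shown above; back up the file before editing, and run the sed only once):

lt_ctl stop
cp $LTDATA/lightdb.conf $LTDATA/lightdb.conf.bak
# insert ltcluster right after canopy in shared_preload_libraries
sed -i "s/shared_preload_libraries='canopy,/shared_preload_libraries='canopy,ltcluster,/" $LTDATA/lightdb.conf
grep shared_preload_libraries $LTDATA/lightdb.conf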
  2. Run lt_ctl start to start the CN instance, then create the high-availability components with the following commands (a quick verification follows below):
ltsql -p 15858 -h localhost -dpostgres -c"create extension ltcluster;"
ltsql -p 15858 -h localhost -dpostgres -c"create role ltcluster superuser password 'ltcluster' login;"
ltsql -p 15858 -h localhost -dpostgres -c"create database ltcluster owner ltcluster;"
  3. Add an authentication entry so the standby has permission to replicate data from the primary; after the echo, reload the configuration with lt_ctl reload:
echo  "
host  replication       ltcluster   10.18.68.0/24  trust
" >> $LTDATA/lt_hba.conf

lt_ctl reload
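
Optionally, verify the new rule by opening a physical replication connection from the standby machine (a sketch; assumes ltsql accepts the same conninfo keywords as psql):

ltsql "host=10.18.68.186 port=15858 user=ltcluster replication=true" -c "IDENTIFY_SYSTEM;"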
  4. Run the following shell script to generate the high-availability configuration file ltcluster.conf:
id=186
NODE_NAME=cn186
ip=10.18.68.186
port=15858

ltclusterconf=$LTHOME/etc/ltcluster/ltcluster.conf

echo "
node_id=$id
node_name='$NODE_NAME'
conninfo='host=$ip port=$port user=ltcluster dbname=ltcluster connect_timeout=2'
data_directory='$LTDATA'
pg_bindir='$LTHOME/bin'
failover='automatic'
promote_command='$LTHOME/bin/ltcluster standby promote -f $ltclusterconf'
follow_command='$LTHOME/bin/ltcluster standby follow -f $ltclusterconf  --upstream-node-id=%n'
restore_command='cp $LTHOME/archive/%f %p'
monitoring_history=true # enable writing of monitoring data
monitor_interval_secs=2 # interval (in seconds) between monitoring data writes
connection_check_type='ping'
reconnect_attempts=3 # attempts to reconnect to the primary before failover (default 6)
reconnect_interval=5
standby_disconnect_on_failover=true
log_level=INFO
log_facility=STDERR
log_file='$LTHOME/etc/ltcluster/ltcluster.log'
failover_validation_command='$LTHOME/etc/ltcluster/ltcluster_failover.sh "$LTHOME" "$LTDATA"'
shutdown_check_timeout=1800
use_replication_slots=true
check_lightdb_command='$LTHOME/etc/ltcluster/check_lightdb.sh'
check_lightdb_interval=10
" > $ltclusterconf
  5. Register the CN primary node with the following commands, start the ltclusterd daemon, and check the status:
ltcluster -f $LTHOME/etc/ltcluster/ltcluster.conf primary register -F
ltclusterd -d -f $LTHOME/etc/ltcluster/ltcluster.conf -p $LTHOME/etc/ltcluster/ltclusterd.pid
ltcluster -f $LTHOME/etc/ltcluster/ltcluster.conf cluster show
ltcluster -f $LTHOME/etc/ltcluster/ltcluster.conf service status

Steps on machine 187 (CN standby)

Machine 187 will serve as the CN standby. Perform the following steps:

  1. Modify the ltcluster.conf generation script from the previous section as follows, then run it to generate ltcluster.conf:
# change the ip, node name, etc. to 187's values
id=187
NODE_NAME=cn187
ip=10.18.68.187
port=15858

# the rest is the same as in the previous section
  2. Clone the CN primary; the -h argument is the primary's IP. Depending on the data volume, this may take anywhere from a few minutes to several hours:
ltcluster -f $LTHOME/etc/ltcluster/ltcluster.conf standby clone -h 10.18.68.186 -p 15858 -U ltcluster
  3. After the clone completes, start the database, register the standby, and check the status:
lt_ctl start
ltcluster -f $LTHOME/etc/ltcluster/ltcluster.conf standby register -F
ltclusterd -d -f $LTHOME/etc/ltcluster/ltcluster.conf -p $LTHOME/etc/ltcluster/ltclusterd.pid
ltcluster -f $LTHOME/etc/ltcluster/ltcluster.conf cluster show
ltcluster -f $LTHOME/etc/ltcluster/ltcluster.conf service status

Sample output is shown below. The cluster monitoring daemon (ltclusterd) is running, the cluster contains one primary and one standby, and the standby's upstream is cn186.

[canopy@host187 ~]$ ltcluster -f $LTHOME/etc/ltcluster/ltcluster.conf service status
  ID | Name  | Role    | Status    | Upstream | ltclusterd | PID     | Paused? | Upstream last seen
 ----+-------+---------+-----------+----------+------------+---------+---------+--------------------
 187 | cn187 | standby |   running | cn186    | running    | 3310911 | no      | 0 second(s) ago    
 186 | cn186 | primary | * running |          | running    | 1118590 | no      | n/a                

Verifying that the CN standby supports DML

On CN primary 186, run SQL via ltsql -p 15858:

create table the_table(id int, code text, price numeric(8,2));
select create_distributed_table('the_table', 'id');
insert into  the_table values (1, '1', 3.439);
insert into  the_table values (2, '2', 6.86);
select * from the_table;

On CN standby 187, run SQL via ltsql -p 15858:

select * from the_table;
delete from the_table where id = 1; -- fails
SET canopy.writable_standby_coordinator TO on; -- allow DML on the standby; the DML below now succeeds
delete from the_table where id = 1;
delete from the_table where id = 2;
select * from the_table;
insert into  the_table values (3, '3', 6.86);
select * from the_table;

To make this permanent, add canopy.writable_standby_coordinator = on to lightdb.conf on both CN nodes and run lt_ctl reload.
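
For example, a minimal sketch (run on both CN nodes; assumes the default lightdb.conf location under $LTDATA):

echo "canopy.writable_standby_coordinator = on" >> $LTDATA/lightdb.conf
lt_ctl reload
ltsql -p 15858 -c "show canopy.writable_standby_coordinator;"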

Deploying LVS load balancing

LVS in DR (direct routing) mode is used for load balancing.

First install ipvsadm: yum install ipvsadm, or install the rpm package from the installation media.
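
It may be worth confirming that the tool and the ip_vs kernel module are available (a quick sketch):

ipvsadm --version
modprobe ip_vs && lsmod | grep ip_vs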

Director script: adjust the VIP, RIP1, RIP2, ethx (network interface; check with ifconfig), and port variables at the top of the script.

#!/bin/sh
#
# Startup script to handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
#   available server built on a cluster of real servers, with the load
#   balancer running on Linux.
# description: start LVS of DR

LOCK=/var/lock/ipvsadm.lock
VIP=10.19.70.166
RIP1=10.18.68.186 # CN IP
RIP2=10.18.68.187 # CN IP
ethx=enp1s0
port=15858 # CN port

. /etc/rc.d/init.d/functions

start() {
     PID=`ipvsadm -Ln | grep ${VIP} | wc -l`
     if   [ $PID -gt 0 ];
     then
           echo "The LVS-DR Server is already running !"
     else
           #Set the Virtual IP Address
           /sbin/ifconfig $ethx:1 $VIP broadcast $VIP netmask 255.255.255.255 up
           /sbin/route add -host $VIP dev $ethx:1
           #Clear IPVS Table
           /sbin/ipvsadm -C

           #Set Lvs
           #echo $VIP:$port
           #echo $RIP1:$port
           #echo $RIP2:$port
           #echo $RIP3:$port

           /sbin/ipvsadm -At $VIP:$port -s rr 
           /sbin/ipvsadm -at $VIP:$port -r $RIP1:$port -g  -w 1
           /sbin/ipvsadm -at $VIP:$port -r $RIP2:$port -g  -w 1
           #/sbin/ipvsadm -at $VIP:$port -r $RIP3:$port -g  -w 1
           /bin/touch $LOCK
           #Run Lvs
           echo "starting LVS-DR Server is ok !"       
     fi
}

stop()    {
           #clear Lvs and vip 
           /sbin/ipvsadm -C
           /sbin/route del -host $VIP dev $ethx:1
           /sbin/ifconfig $ethx:1 down >/dev/null
           rm -rf $LOCK
           echo "stopping LVS-DR server is ok !"
}

status() {
     if [ -e $LOCK ];
     then
         echo "The LVS-DR Server is already running !"
     else
         echo "The LVS-DR Server is not running !"
     fi
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        stop
        start
        ;;
  status)
        status
        ;;
  *)
        echo "Usage: $1 {start|stop|restart|status}"
        exit 1
esac
exit 0

RealServer script: adjust the VIP and ethx (network interface; check with ifconfig) variables at the top of the script.

#!/bin/sh
#
# Startup script to handle the initialisation of LVS
# chkconfig: - 28 72
# description: Initialise the Linux Virtual Server for DR
#
### BEGIN INIT INFO
# Provides: ipvsadm
# Required-Start: $local_fs $network $named
# Required-Stop: $local_fs $remote_fs $network
# Short-Description: Initialise the Linux Virtual Server
# Description: The Linux Virtual Server is a highly scalable and highly
#   available server built on a cluster of real servers, with the load
#   balancer running on Linux.
# description: start LVS of DR-RIP
LOCK=/var/lock/ipvsadm.lock
VIP=10.19.70.166
ethx=enp1s0
. /etc/rc.d/init.d/functions
start() {
     PID=`ifconfig | grep lo:0 | wc -l`
     if [ $PID -ne 0 ];
     then
         echo "The LVS-DR-RIP Server is already running !"
     else
         /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
         /sbin/route add -host $VIP dev lo:0
         echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
         echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
         echo "1" >/proc/sys/net/ipv4/conf/$ethx/arp_ignore
         echo "2" >/proc/sys/net/ipv4/conf/$ethx/arp_announce
         echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
         echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
         /bin/touch $LOCK
         echo "starting LVS-DR-RIP server is ok !"
     fi
}

stop() {
         /sbin/route del -host $VIP dev lo:0
         /sbin/ifconfig lo:0 down  >/dev/null
         echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
         echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
         echo "0" >/proc/sys/net/ipv4/conf/$ethx/arp_ignore
         echo "0" >/proc/sys/net/ipv4/conf/$ethx/arp_announce
         echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
         echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
         rm -rf $LOCK
         echo "stopping LVS-DR-RIP server is ok !"
}

status() {
     if [ -e $LOCK ];
     then
        echo "The LVS-DR-RIP Server is already running !"
     else
        echo "The LVS-DR-RIP Server is not running !"
     fi
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        stop
        start
        ;;
  status)
        status
        ;;
  *)
        echo "Usage: $1 {start|stop|restart|status}"
        exit 1
esac
exit 0

Machines are limited here: 186 is the CN primary (i.e. a RealServer) and also the LVS Director, while 187 is the CN standby (a RealServer). Upload both the Director and RealServer scripts above to /etc/init.d on 186, upload the RealServer script to /etc/init.d on 187, make them executable with chmod +x, and start the services:

# 186
./lvs-dr start # Director script
./lvs-rs start # RealServer script

# 187
./lvs-rs start
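
Since both scripts carry chkconfig headers, they can also be registered to start on boot on SysV-style systems (a sketch; assumes the scripts are saved as lvs-dr and lvs-rs, matching the usage above):

# on 186
chkconfig --add lvs-dr && chkconfig lvs-dr on
chkconfig --add lvs-rs && chkconfig lvs-rs on
# on 187
chkconfig --add lvs-rs && chkconfig lvs-rs on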

Use ip a to check whether the virtual IP has been added to the corresponding interface.
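
For example, given the interface names used in the scripts above:

ip a show enp1s0   # on the Director, the VIP appears with label enp1s0:1
ip a show lo       # on each RealServer, the VIP appears as lo:0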

Open several clients (e.g. ltsql) connecting to the VIP, then check the load distribution on the Director with ipvsadm -Ln --stats.
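
For example, a quick way to open several short-lived connections (a sketch; assumes the lt_test database shown earlier):

for i in $(seq 1 5); do
  ltsql -h 10.19.70.166 -p 15858 -d lt_test -c "select 1;"
done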

# ipvsadm -Ln --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  10.19.70.166:15858                  5       16       15     2918     5763
  -> 10.18.68.186:15858                  3        7        6     1320     2461
  -> 10.18.68.187:15858                  2        9        9     1598     3302
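
To watch the counters update live while clients connect, the following may also help:

watch -n 2 'ipvsadm -Ln --stats'
ipvsadm -Lnc   # list current connections and their state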

