
Getting Started with OceanBase: Deploying a Production-Grade Three-Node Distributed Cluster

Posted: 2024-03-14 16:56:51

Prerequisites

An OceanBase database cluster consists of at least three nodes, so first prepare three servers:

IP         Spec                                       OS
x.x.x.150  Intel x86, 12 cores, 64 GB RAM, 1 TB SSD   CentOS 7.9
x.x.x.155  Intel x86, 12 cores, 64 GB RAM, 1 TB SSD   CentOS 7.9
x.x.x.222  Intel x86, 12 cores, 64 GB RAM, 1 TB SSD   CentOS 7.9

For the official hardware and system-software requirements for running an OceanBase cluster, see:

https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000000508277

Kernel and system settings required on every node:

$ vi /etc/sysctl.conf
vm.swappiness = 0
vm.max_map_count = 655360
vm.min_free_kbytes = 2097152
vm.overcommit_memory = 0
fs.file-max = 6573688

$ sysctl -p    

$ echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
$ echo never > /sys/kernel/mm/transparent_hugepage/enabled

$ systemctl disable firewalld 
$ systemctl stop firewalld
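To confirm those kernel settings actually took effect on each node, a quick check script helps. This is a minimal sketch that only reads standard /proc and /sys paths (nothing OceanBase-specific):

```shell
#!/bin/sh
# Print the current values of the kernel parameters configured above.
for p in vm/swappiness vm/max_map_count vm/min_free_kbytes \
         vm/overcommit_memory fs/file-max; do
    printf '%s = %s\n' "$p" "$(cat /proc/sys/$p)"
done
# Transparent huge pages should report [never] after the echo commands above;
# the file can be absent on some kernels, hence the guard.
if [ -r /sys/kernel/mm/transparent_hugepage/enabled ]; then
    cat /sys/kernel/mm/transparent_hugepage/enabled
fi
```

Note that the `echo never > .../transparent_hugepage/enabled` setting does not survive a reboot; persist it via rc.local or a tuned profile if you need it permanent.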

If you are deploying on physical machines, it is recommended to enable maximum-performance mode in the BIOS and, on x86 CPUs, hyper-threading. Also make sure the clocks on the three nodes are synchronized.

OceanBase supports several deployment methods. Here we deploy from the command line with the official cluster management tool OBD (colloquially, the "black screen" deployment).

Download the installation package; we go straight for the All-in-One bundle: https://www.oceanbase.com/softwarecenter

Initializing the control node

Pick any one of the three machines to act as the cluster's control node, from which OBD operates the whole cluster. The control node is only used for management, so deploying it on a separate dedicated machine also works.

Upload the package to the control node and install OBD first:

[ob@localhost ~]$ tar -xzf oceanbase-all-in-one-*.tar.gz
[ob@localhost ~]$ cd oceanbase-all-in-one/bin/
[ob@localhost ~]$ ./install.sh
[ob@localhost ~]$ source ~/.oceanbase-all-in-one/bin/env.sh

At this point both the cluster management tool obd and the client tool obclient are installed.

[ob@localhost ~]$ which obd
~/.oceanbase-all-in-one/obd/usr/bin/obd
[ob@localhost ~]$ which obclient
~/.oceanbase-all-in-one/obclient/u01/obclient/bin/obclient

Writing the cluster deployment configuration

The oceanbase-all-in-one/conf directory contains many sample configuration files that you can adapt to your deployment. Here I want a standard OceanBase distributed cluster with the following components:

  • observer - the core database service
  • obproxy - the database proxy, load-balancing across nodes
  • obagent - monitoring metrics collector
  • grafana - monitoring dashboards
  • prometheus - monitoring data storage

The configuration file looks like this:

## Only need to configure when remote login is required
user:
  username: ob
  password: oceanbase
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: x.x.x.222
    - name: server2
      ip: x.x.x.150
    - name: server3
      ip: x.x.x.155
  global:
    # Please set devname as the network adapter's name whose ip is in the setting of servers.
    # if set servers as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adapter's name is "eth0", please use "eth0"
    devname: eno1
    # if current hardware's memory capacity is smaller than 50G, please use the setting of "mini-single-example.yaml" and do a small adjustment.
    memory_limit: 32G # The maximum running memory for an observer
    # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
    system_memory: 8G
    datafile_size: 50G # Size of the data file.
    log_disk_size: 20G # The size of disk space used by the clog files.
    syslog_level: INFO # System log level. The default value is INFO.
    enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
    enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false.
    max_syslog_file_count: 4 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
    # observer cluster name, consistent with obproxy's cluster_name
    appname: obcluster
    production_mode: false
    # root_password: # root user password, can be empty
    # proxyro_password: # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  # In this example, multiple ob processes on a single node would need different ports, hence the per-server sections.
  # When deploying the ob cluster across multiple nodes, the same port and path settings can be reused on every node.
  server1:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    #  The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/ob/deploy/observer
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone1
  server2:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    #  The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/ob/deploy/observer
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone2
  server3:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    #  The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/ob/deploy/observer
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone3
obproxy-ce:
  # Set dependent components for the component.
  # When the associated configurations are not done, OBD automatically gets these configurations from the dependent components.
  depends:
    - oceanbase-ce
  servers:
    - x.x.x.222
  global:
    listen_port: 2883 # External port. The default value is 2883.
    prometheus_listen_port: 2884 # The Prometheus port. The default value is 2884.
    home_path: /home/ob/deploy/obproxy
    # oceanbase root server list
    # format: ip:mysql_port;ip:mysql_port. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    # rs_list: 192.168.1.2:2881;192.168.1.3:2881;192.168.1.4:2881
    enable_cluster_checkout: false
    # observer cluster name, consistent with oceanbase-ce's appname. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    # cluster_name: obcluster
    skip_proxy_sys_private_check: true
    enable_strict_kernel_release: false
    # obproxy_sys_password: # obproxy sys user password, can be empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    # observer_sys_password: # proxyro user password, consistent with oceanbase-ce's proxyro_password, can be empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
obagent:
  depends:
    - oceanbase-ce
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: x.x.x.222
    - name: server2
      ip: x.x.x.150
    - name: server3
      ip: x.x.x.155
  global:
    home_path: /home/ob/deploy/obagent
    ob_monitor_status: active
prometheus:
  depends:
    - obagent
  servers:
    - x.x.x.222
  global:
    home_path: /home/ob/deploy/prometheus
grafana:
  depends:
    - prometheus
  servers:
    - x.x.x.222
  global:
    home_path: /home/ob/deploy/grafana
    login_password: oceanbase

The file is organized by component: the three nodes are defined as three servers, one in each of three zones, so the data is kept in three replicas. See the inline comments for what each parameter means.

Note the ports: observers serve client traffic on 2881 and inter-node communication on 2882, while obproxy listens on 2883.

Deploying the cluster

With the configuration file ready, deployment is a two-command affair. First run:

[ob@localhost ~]$ obd cluster deploy obtest -c topology.yaml

This step copies the files each component needs to every node over ssh, and also creates directories and services and sets permissions.

When the final output reports that the deployment succeeded, this step is done.

Next, start the cluster as prompted:

[ob@localhost ~]$ obd cluster start obtest
Get local repositories ok
Search plugins ok
Load cluster param plugin ok
Open ssh connection ok
Check before start observer ok
Check before start obproxy ok
Check before start obagent ok
Check before start prometheus ok
Check before start grafana ok
Start observer ok
observer program health check ok
Connect to observer x.x.x.222:2881 ok
Initialize oceanbase-ce ok
Start obproxy ok
obproxy program health check ok
Connect to obproxy ok
Initialize obproxy-ce ok
Start obagent ok
obagent program health check ok
Connect to Obagent ok
Start promethues ok
prometheus program health check ok
Connect to Prometheus ok
Initialize prometheus ok
Start grafana ok
grafana program health check ok
Connect to grafana ok
Initialize grafana ok
Wait for observer init ok
+-----------------------------------------------+
|                    observer                   |
+-------------+---------+------+-------+--------+
| ip          | version | port | zone  | status |
+-------------+---------+------+-------+--------+
| x.x.x.150 | 4.2.2.0 | 2881 | zone2 | ACTIVE |
| x.x.x.155 | 4.2.2.0 | 2881 | zone3 | ACTIVE |
| x.x.x.222 | 4.2.2.0 | 2881 | zone1 | ACTIVE |
+-------------+---------+------+-------+--------+
obclient -hx.x.x.150 -P2881 -uroot -p'KHaaKw9dcLwXNvKrT3lc' -Doceanbase -A

+-----------------------------------------------+
|                    obproxy                    |
+-------------+------+-----------------+--------+
| ip          | port | prometheus_port | status |
+-------------+------+-----------------+--------+
| x.x.x.222 | 2883 | 2884            | active |
+-------------+------+-----------------+--------+
obclient -hx.x.x.222 -P2883 -uroot -p'KHaaKw9dcLwXNvKrT3lc' -Doceanbase -A

+----------------------------------------------------------------+
|                            obagent                             |
+-------------+--------------------+--------------------+--------+
| ip          | mgragent_http_port | monagent_http_port | status |
+-------------+--------------------+--------------------+--------+
| x.x.x.222 | 8089               | 8088               | active |
| x.x.x.150 | 8089               | 8088               | active |
| x.x.x.155 | 8089               | 8088               | active |
+-------------+--------------------+--------------------+--------+
+-------------------------------------------------------+
|                       prometheus                      |
+-------------------------+-------+------------+--------+
| url                     | user  | password   | status |
+-------------------------+-------+------------+--------+
| http://x.x.x.222:9090 | admin | qISoDdWHRX | active |
+-------------------------+-------+------------+--------+
+------------------------------------------------------------------+
|                             grafana                              |
+-------------------------------------+-------+-----------+--------+
| url                                 | user  | password  | status |
+-------------------------------------+-------+-----------+--------+
| http://x.x.x.222:3000/d/oceanbase | admin | oceanbase | active |
+-------------------------------------+-------+-----------+--------+
obtest running
Trace ID: 98204f6e-e1d5-11ee-b268-1c697a639d50
If you want to view detailed obd logs, please run: obd display-trace 98204f6e-e1d5-11ee-b268-1c697a639d50

You can also check cluster status with the list and display commands:

[ob@localhost ~]$ obd cluster list
[ob@localhost ~]$ obd cluster display obtest

Operating the cluster

The startup output above already printed the connection strings; there are two entry points.

One is to connect directly to any single observer; the other is to go through the obproxy load balancer. Mind the different IPs and port numbers between the two. Either obclient or the mysql client works as the connection tool.

[ob@localhost ~]$ obclient -hx.x.x.222 -P2883 -uroot -p'KHaaKw9dcLwXNvKrT3lc' -Doceanbase -A
Welcome to the OceanBase.  Commands end with ; or \g.
Your OceanBase connection id is 5
Server version: OceanBase_CE 4.2.2.0 (r100010012024022719-c984fe7cb7a4cef85a40323a0d073f0c9b7b8235) (Built Feb 27 2024 19:20:54)

Copyright (c) 2000, 2018, OceanBase and/or its affiliates. All rights reserved.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

obclient [oceanbase]>

First, look at the state of the three nodes:
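A query like the following shows each node's zone and status. This is a sketch assuming the OceanBase 4.x data dictionary view `DBA_OB_SERVERS` (run it in the sys tenant; column names may vary slightly between versions):

```sql
-- List every observer with its zone and status (sys tenant).
SELECT SVR_IP, SVR_PORT, ZONE, WITH_ROOTSERVER, STATUS
FROM oceanbase.DBA_OB_SERVERS;
```

All three rows should show STATUS = ACTIVE, matching the table printed by `obd cluster start` above.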

Next, look at how the three replicas are distributed:

obclient [oceanbase]> SELECT TENANT_ID,TENANT_NAME,TENANT_TYPE,PRIMARY_ZONE,LOCALITY FROM oceanbase.DBA_OB_TENANTS;
+-----------+-------------+-------------+--------------+---------------------------------------------+
| TENANT_ID | TENANT_NAME | TENANT_TYPE | PRIMARY_ZONE | LOCALITY                                    |
+-----------+-------------+-------------+--------------+---------------------------------------------+
|         1 | sys         | SYS         | RANDOM       | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 |
+-----------+-------------+-------------+--------------+---------------------------------------------+
1 row in set (0.012 sec)

The LOCALITY column records the replica distribution: FULL means a full replica, which serves reads and writes and votes in leader elections, and the suffix shows which zone each replica lives in.
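To see where those replicas physically live and which one is currently the leader, the log-stream location view can help. Again a sketch, assuming the 4.x view name `CDB_OB_LS_LOCATIONS` (queried from the sys tenant, it covers all tenants):

```sql
-- Replica location and role (LEADER/FOLLOWER) per log stream, all tenants.
SELECT TENANT_ID, LS_ID, SVR_IP, ZONE, ROLE
FROM oceanbase.CDB_OB_LS_LOCATIONS;
```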

Now try creating a resource pool and tenant that span the zones. Note that UNIT_NUM must not exceed the number of observers in each zone.

obclient [oceanbase]> CREATE RESOURCE UNIT uc1 MAX_CPU 1, MEMORY_SIZE '2G', LOG_DISK_SIZE '2G';
Query OK, 0 rows affected (0.009 sec)

obclient [oceanbase]> CREATE RESOURCE POOL rp1 UNIT 'uc1', UNIT_NUM 1, ZONE_LIST ('zone1', 'zone2', 'zone3');
Query OK, 0 rows affected (0.029 sec)

obclient [oceanbase]> CREATE TENANT tt resource_pool_list=('rp1')  set ob_tcp_invited_nodes = '%';
Query OK, 0 rows affected (51.995 sec)

Log in to the new tenant and run some data operations (the new tenant's root user has an empty password by default):

[ob@localhost ~]$ obclient -hx.x.x.222 -P2883 -uroot@tt  -Doceanbase -A
Welcome to the OceanBase.  Commands end with ; or \g.
Your OceanBase connection id is 59
Server version: OceanBase_CE 4.2.2.0 (r100010012024022719-c984fe7cb7a4cef85a40323a0d073f0c9b7b8235) (Built Feb 27 2024 19:20:54)

Copyright (c) 2000, 2018, OceanBase and/or its affiliates. All rights reserved.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

obclient [oceanbase]>
obclient [oceanbase]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| oceanbase          |
| test               |
+--------------------+
4 rows in set (0.002 sec)

obclient [oceanbase]> create database tt_db1;
Query OK, 1 row affected (0.069 sec)

obclient [oceanbase]> use tt_db1;
Database changed
obclient [tt_db1]> create table t1(id int primary key,name varchar(50),dt datetime);
Query OK, 0 rows affected (0.234 sec)

obclient [tt_db1]> select * from t1;
Empty set (0.022 sec)

obclient [tt_db1]> insert into t1 values(1,'aaa',now());
Query OK, 1 row affected (0.004 sec)

obclient [tt_db1]> select * from t1;
+----+------+---------------------+
| id | name | dt                  |
+----+------+---------------------+
|  1 | aaa  | 2024-03-14 16:24:00 |
+----+------+---------------------+
1 row in set (0.001 sec)
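Since the new tenant's root user comes up with an empty password, it is worth setting one right away. A minimal MySQL-mode example (the password string is just a placeholder, run inside the tt tenant):

```sql
-- Standard MySQL-mode syntax; reconnect afterwards with -p'YourStrongPassword'.
ALTER USER root IDENTIFIED BY 'YourStrongPassword';
```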

In the next post, I'll try migrating data from MySQL to OceanBase.

From: https://www.cnblogs.com/hohoa/p/18073243
