Dameng Database Upgrade, Configuration, and Handling a Few Exceptions
Background
Over the last few days one of our Dameng databases kept reporting errors.
The message in the backend was:
Cause: dm.jdbc.driver.DMException: 超出全局hash join空间,适当增加HJ_BUF_GLOBAL_SIZE (global hash join space exceeded; increase HJ_BUF_GLOBAL_SIZE accordingly)
This seemed strange, because the memory settings I had configured should have been fine.
It took a long time to get the problem sorted out, so I want to summarize it here.
What Happened
I backed up a database from the following Dameng 8.1 build:
1-1-190-21.03.26-137190-ENT
and then restored it onto a database of the following build:
03134284132-20240115-215128-20081
After connecting the application, it kept throwing the error:
超出全局hash join空间,适当增加HJ_BUF_GLOBAL_SIZE (global hash join space exceeded; increase HJ_BUF_GLOBAL_SIZE accordingly)
At first I suspected a bug in that build, so I went to eco.dameng.com
and downloaded a newer build:
03134284172-20240321-222308-20093
After the upgrade the problem persisted.
Upgrade Procedure
Since the version numbers differed only slightly, I upgraded by installing locally and copying files over; the individual steps are below, followed by a consolidated sketch.
After downloading, mount the ISO locally.
Install the Dameng database into /tmp.
Note: during installation choose custom mode and select only components 1, 2, 3, 4; do not create a database instance.
Then stop the database service: systemctl stop DmServiceDMSERVER
Then overwrite the bin and bin2 directories under /opt/dmdbms/ with the ones from the /tmp installation.
Note that the DmServiceDMSERVER script in the original bin directory must be kept.
Then restart the database and you are done:
systemctl start DmServiceDMSERVER
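Put together, the copy-over upgrade looks roughly like the sketch below. The ISO file name, the mount point, and the /tmp/dmdbms install path are assumptions for illustration; adjust them to whatever the installer actually used on your machine.

mount -o loop /root/dm8_update.iso /mnt          # mount the downloaded ISO
# run the installer from /mnt (custom mode, components 1-4 only, no database instance),
# installing the new build under /tmp/dmdbms
systemctl stop DmServiceDMSERVER                 # stop the running instance
cp /opt/dmdbms/bin/DmServiceDMSERVER /root/      # keep the original service script
cp -rf /tmp/dmdbms/bin  /opt/dmdbms/             # overwrite bin and bin2 with the new build
cp -rf /tmp/dmdbms/bin2 /opt/dmdbms/
cp /root/DmServiceDMSERVER /opt/dmdbms/bin/      # put the service script back
systemctl start DmServiceDMSERVER                # bring the database back up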
Memory Parameter Settings
Dameng has no single overall memory configuration parameter;
several parameters have to work together. The relevant section of dm.ini is dumped below, with a note on the hash join parameters after it.
cat /opt/dmdbms/data/DAMENG/dm.ini
MAX_OS_MEMORY = 30 #found this is not what I assumed; it is the maximum percentage of OS memory the DM server may use (30% here)
MEMORY_POOL = 500 #in MB; there can be more than one pool
MEMORY_N_POOLS = 1 #number of memory pools
MEMORY_TARGET = 6144 #in MB; the note calls it the shared pool size, which feels like a misnomer
MEMORY_EXTENT_SIZE = 32 #presumably the memory extent size; exact meaning unclear
MEMORY_LEAK_CHECK = 0 #Memory Pool Leak Checking Flag
MEMORY_MAGIC_CHECK = 1 #Memory Pool Magic Checking Flag
HUGEPAGE_THRESHOLD = 16 #IF not zero, try using hugepage if allocating size >= threshold * 2M
MEMORY_BAK_POOL = 4 #Memory Backup Pool Size In Megabyte
HUGE_MEMORY_PERCENTAGE = 50 #Maximum percent of HUGE buffer that can be allocated to work as common memory pool
HUGE_BUFFER = 80 #Initial Huge Buffer Size In Megabytes
HUGE_BUFFER_POOLS = 4 #number of Huge buffer pools
BUFFER = 1000 #Initial System Buffer Size In Megabytes
BUFFER_POOLS = 8 #presumably the product of the two (BUFFER x BUFFER_POOLS) is the buffer area size, which would make an 8 GB buffer here; the default seems to be 19 GB
FAST_POOL_PAGES = 3000 #number of pages for fast pool
FAST_ROLL_PAGES = 1000 #number of pages for fast roll pages
KEEP = 8 #system KEEP buffer size in Megabytes
RECYCLE = 300 #system RECYCLE buffer size in Megabytes
RECYCLE_POOLS = 8 #the recycle pool should then be 2.4 GB
ROLLSEG = 1 #system ROLLSEG buffer size in Megabytes
ROLLSEG_POOLS = 8 #Number of rollseg buffer pools
MULTI_PAGE_GET_NUM = 1 #Maximum number of pages for each read of buffer
PRELOAD_SCAN_NUM = 0 #The number of pages scanned continuously to start preload task
PRELOAD_EXTENT_NUM = 0 #The number of clusters preloaded for the first time
SORT_BUF_SIZE = 20 #maximum sort buffer size in Megabytes
SORT_BLK_SIZE = 1 #maximum sort blk size in Megabytes
SORT_BUF_GLOBAL_SIZE = 1000 #maximum global sort buffer size in Megabytes
SORT_FLAG = 1 #choose method of sort
HAGR_HASH_SIZE = 100000 #hash table size for hagr
HJ_BUF_GLOBAL_SIZE = 10240 #maximum hash buffer size for all hash join in Megabytes
HJ_BUF_SIZE = 50 #maximum hash buffer size for single hash join in Megabytes
HJ_BLK_SIZE = 2 #hash buffer size allocated each time for hash join in Megabytes
HAGR_BUF_GLOBAL_SIZE = 5000 #maximum buffer size for all hagr in Megabytes
HAGR_BUF_SIZE = 500 #maximum buffer size for single hagr in Megabytes
HAGR_BLK_SIZE = 2 #buffer size allocated each time for hagr in Megabytes
MTAB_MEM_SIZE = 8 #memory table size in Kilobytes
FTAB_MEM_SIZE = 0 #file table package size in Kilobytes
MMT_GLOBAL_SIZE = 4000 #memory map table global size in megabytes
MMT_SIZE = 0 #memory map table size in megabytes
MMT_FLAG = 1 #ways of storing bdta data in memory map table
DICT_BUF_SIZE = 50 #dictionary buffer size in Megabytes
HFS_CACHE_SIZE = 160 #hfs cache size in Megabytes, used in huge horizon table for insert, update,delete
VM_STACK_SIZE = 256 #VM stack size in Kilobytes
VM_POOL_SIZE = 64 #VM pool size in Kilobytes
VM_POOL_TARGET = 16384 #VM pool target size in Kilobytes
SESS_POOL_SIZE = 64 #session pool size in Kilobytes
SESS_POOL_TARGET = 16384 #session pool target size in Kilobytes
RT_HEAP_TARGET = 8192 #runtime heap target size in Kilobytes
VM_MEM_HEAP = 0 #Whether to allocate memory to VM from HEAP
RFIL_RECV_BUF_SIZE = 16 #redo file recover buffer size in Megabytes
COLDATA_POOL_SIZE = 0 #coldata pool size for each worker group
HAGR_DISTINCT_HASH_TABLE_SIZE = 10000 #Size of hagr distinct hash table
CNNTB_HASH_TABLE_SIZE = 100 #Size of hash table in connect-by operation
GLOBAL_RTREE_BUF_SIZE = 100 #The total size of buffer for rtree
SINGLE_RTREE_BUF_SIZE = 10 #The size of buffer for single rtree
SORT_OPT_SIZE = 0 #once max memory size of radix sort assist count array
TSORT_OPT = 1 #minimizing memory allocation during small rowset sorting if possible
DFS_BUF_FLUSH_OPT = 0 #Whether to flush buffer page in opt mode for DFS storage
BIND_PLN_PERCENT = 30 #Maximum percent of bind plan in plan cache pool
FBACK_HASH_SIZE = 10000 #hash table size for flashback function
XBOX_MEMORY_TARGET = 1024 #Memory target size in Megabyte of XBOX system
LIKE_PATTERN_NUM = 300 #The maximum length of like pattern-matching
MEM_POOL_EXTEND_MODE = 7 #Memory pool extend policy
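For the hash join error specifically, the relevant knobs are HJ_BUF_GLOBAL_SIZE (total hash join memory across the instance) and HJ_BUF_SIZE (per single hash join). Below is a minimal sketch, in the same disql style as the script further down, for checking what the running instance actually uses and raising the global limit; the connection string is a placeholder, V$PARAMETER and SP_SET_PARA_VALUE are the standard DM interfaces as far as I know, and the 10240 MB value just mirrors the dm.ini above. If the parameter turns out to be static on your build, use scope 2 and restart the service instead.

cd /opt/dmdbms/bin/
./disql SYSDBA/PASSWORD@HOST:PORT <<'EOS'
-- check what the running instance actually uses (it may differ from dm.ini after a restore);
-- the quoted heredoc keeps the shell from expanding v$parameter
select * from v$parameter where name like 'HJ_BUF%';
-- scope 1 updates both the running instance and dm.ini for dynamic parameters
SP_SET_PARA_VALUE(1, 'HJ_BUF_GLOBAL_SIZE', 10240);
exit
EOS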
Periodically Gathering Statistics
systemctl restart DmServiceDMSERVER
sleep 20
cd /opt/dmdbms/bin/
./disql mydbname/PASSWORD@HOST:PORT <<EOS
select id_code() ;
DBMS_STATS.GATHER_SCHEMA_STATS('mydbname',100,TRUE,'FOR ALL COLUMNS SIZE AUTO');
exit
EOS
Set Up a Scheduled Task
Add a crontab entry so the statistics maintenance above runs periodically:
1 */12 * * * /deploy/perfdm
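This assumes the restart-and-gather snippet above has been saved as an executable script at /deploy/perfdm (the path comes from the cron entry); registering it could look like the following, where the schedule string is the same one shown above.

chmod +x /deploy/perfdm          # the script above, saved with a #!/bin/bash shebang
# append the job to the current crontab: run at minute 1, every 12 hours
( crontab -l 2>/dev/null; echo '1 */12 * * * /deploy/perfdm' ) | crontab -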
From: https://www.cnblogs.com/jinanxiaolaohu/p/18147814