
Difference in write performance between two databases


Background: I have two databases, which I'll call 200 and 203. Their servers have comparable hardware (203's is actually a bit better). Both run CentOS 7.7 and Oracle 19c, both have archive logging enabled, and they sit in the same machine room on the same network segment. When I insert 100,000 rows into each database over JDBC from my local machine, committing after every insert, 200 takes about 220 seconds while 203 takes about 340 seconds. That is a large gap. What could be causing it, and how should I go about troubleshooting it?

My troubleshooting process:

Java code

The Java code is straightforward: the table has ten columns, and the program inserts 100,000 rows, with each insert committed automatically (JDBC auto-commit is on by default).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JdbcBatchTest {

    public static int exec(int totalCnt) throws SQLException, ClassNotFoundException {

        String user = "xxxx";
        String password = "xxxx";  
        String driver;
        String url;
        driver = "oracle.jdbc.OracleDriver";
        url = "jdbc:oracle:thin:@xxx:1521/xxx";
        
        long l1 = System.currentTimeMillis();
        Class.forName(driver);
        Connection connection = DriverManager.getConnection(url, user, password);

        String sql = "insert into test411 values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
        PreparedStatement preparedStatement = connection.prepareStatement(sql);
        for (int i = 1; i <= totalCnt; i++) {
            preparedStatement.setInt(1, i);
            preparedStatement.setString(2, "222fda2e41e0488a983483b7e6b9cc82" + i);
            preparedStatement.setString(3, "33744f367c0549b798b841e707bcf539" + i);
            preparedStatement.setString(4, "44ae0195ec864ba78f29d62dc387c03b" + i);
            preparedStatement.setString(5, "552aeb154f684df892b700a0a561f418" + i);
            preparedStatement.setString(6, "662fda2e41e0488a983483b7e6b9cc82" + i);
            preparedStatement.setString(7, "77744f367c0549b798b841e707bcf539" + i);
            preparedStatement.setString(8, "88ae0195ec864ba78f29d62dc387c03b" + i);
            preparedStatement.setString(9, "992aeb154f684df892b700a0a561f418" + i);
            preparedStatement.setString(10, "102aeb154f684df892b700a0a561f418" + i);
            preparedStatement.executeUpdate();   // auto-commit is on, so every row is committed individually
        }
        connection.close();
        long l2 = System.currentTimeMillis();
        System.out.println("Total rows: " + totalCnt + ", elapsed: " + (l2 - l1) + " ms");
        return 100;
    }

    public static void main(String[] args) throws SQLException, ClassNotFoundException {
        exec(100000);
    }
}

200: Total rows: 100000, elapsed: 225353 ms
203: Total rows: 100000, elapsed: 348713 ms
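
Since auto-commit means every one of the 100,000 inserts pays its own commit, and therefore its own log file sync wait, a useful follow-up experiment is to rerun the same insert against both instances with auto-commit disabled, batched statements, and a single commit at the end. The sketch below is only an illustration of that idea: the URL, credentials and table are the same placeholders as in the test above, the 1,000-row batch size is an arbitrary choice, and the filler strings are shorter than the originals.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JdbcBatchCompare {

    public static void exec(String url, String user, String password, int totalCnt) throws SQLException {
        long start = System.currentTimeMillis();
        try (Connection connection = DriverManager.getConnection(url, user, password)) {
            connection.setAutoCommit(false);                      // no per-row commits
            String sql = "insert into test411 values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
            try (PreparedStatement ps = connection.prepareStatement(sql)) {
                for (int i = 1; i <= totalCnt; i++) {
                    ps.setInt(1, i);
                    for (int col = 2; col <= 10; col++) {
                        ps.setString(col, "filler_col" + col + "_" + i);   // shorter filler than the original strings
                    }
                    ps.addBatch();
                    if (i % 1000 == 0) {
                        ps.executeBatch();                        // one round trip per 1000 rows
                    }
                }
                ps.executeBatch();                                // flush the remainder
            }
            connection.commit();                                  // a single commit, hence a single log file sync
        }
        long end = System.currentTimeMillis();
        System.out.println("Total rows: " + totalCnt + ", elapsed: " + (end - start) + " ms");
    }

    public static void main(String[] args) throws SQLException {
        exec("jdbc:oracle:thin:@xxx:1521/xxx", "xxxx", "xxxx", 100000);   // placeholder URL/credentials
    }
}

If the gap between 200 and 203 largely disappears in this mode, the difference is concentrated in commit processing (redo write latency) rather than in parsing or data block changes.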

Server configuration

200's CPU is actually slightly weaker than 203's, and its NVMe cards are also lower-spec.

Configuration of 200:

Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz * 2

NVMe cards (screenshot: 200 nvme.png)

[root@billhost ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  3.5T  0 disk  
├─sda1        8:1    0  500M  0 part  /boot/efi
├─sda2        8:2    0   32G  0 part  [SWAP]
└─sda3        8:3    0    2T  0 part  /
nvme0n1     259:1    0  3.5T  0 disk  
└─nvme0n1p1 259:3    0  3.5T  0 part  
  └─md0       9:0    0  3.5T  0 raid1 /data
nvme1n1     259:0    0  3.5T  0 disk  
└─nvme1n1p1 259:2    0  3.5T  0 part  
  └─md0       9:0    0  3.5T  0 raid1 /data
[oracle@billhost ~]$ cat /etc/sysctl.conf 
kernel.shmmax = 82463371264
kernel.shmall = 20132659
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
vm.swappiness = 10

Configuration of 203:

Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz * 2

NVMe cards (screenshot: 203 nvme.png)

[root@scmdb ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  5.2T  0 disk  
├─sda1        8:1    0  500M  0 part  /boot/efi
├─sda2        8:2    0  500M  0 part  /boot
├─sda3        8:3    0   16G  0 part  [SWAP]
└─sda4        8:4    0  5.2T  0 part  /
sdb           8:16   0   14T  0 disk  
└─sdb1        8:17   0   14T  0 part  /data2
sdc           8:32   0  3.7T  0 disk  
└─sdc1        8:33   0  3.7T  0 part  /rmanbak
nvme0n1     259:1    0  3.7T  0 disk  
└─nvme0n1p1 259:3    0  3.7T  0 part  
  └─md0       9:0    0  3.7T  0 raid1 /data
nvme1n1     259:0    0  3.7T  0 disk  
└─nvme1n1p1 259:2    0  3.7T  0 part  
  └─md0       9:0    0  3.7T  0 raid1 /data
[root@scmdb ~]# cat /etc/sysctl.conf 
kernel.shmmax = 77309411328
kernel.core_uses_pid = 1
kernel.shmall = 60293120
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65535
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
vm.swappiness=10
vm.vfs_cache_pressure=200
vm.min_free_kbytes=409600
fs.aio-max-nr=3145728

Database parameters

Database parameters on 200:

*._use_adaptive_log_file_sync='FALSE'   
*._use_single_log_writer='TRUE'   
*.audit_file_dest='/u01/app/oracle/admin/cdb200c/adump'
*.audit_trail='NONE'
*.compatible='19.0.0'
*.control_file_record_keep_time=31
*.control_files='/data/oradata/CDB200C/control01.ctl','/data/oradata/CDB200C/control02.ctl'
*.db_block_size=8192
*.db_name='cdb200c'
*.deferred_segment_creation=FALSE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=cdb200cXDB)'
*.enable_pluggable_database=true
*.log_archive_dest_1='location=/data/archivelog/cdb200c/'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=2000
*.open_links_per_instance=50
*.open_links=50
*.pga_aggregate_target=12g
*.processes=1000
*.remote_login_passwordfile='EXCLUSIVE'
*.session_cached_cursors=100
*.sga_target=24g
*.undo_tablespace='UNDOTBS1'

Database parameters on 203:

*._use_adaptive_log_file_sync='FALSE'   
*._use_single_log_writer='TRUE'          
*.audit_file_dest='/u01/app/oracle/admin/scmdb/adump'
*.audit_trail='NONE'
*.compatible='19.0.0'
*.control_file_record_keep_time=31
*.control_files='/u02/oradata/SCMDB/control01.ctl','/u02/oradata/SCMDB/control02.ctl'
*.db_32k_cache_size=21474836480
*.db_block_size=8192
*.db_cache_size=21474836480
*.db_files=1000
*.db_name='scmdb'
*.db_unique_name='SCMDB'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=scmdbXDB)'
*.enable_pluggable_database=TRUE
*.fal_server='scmdbdg'
*.log_archive_config='DG_CONFIG=(scmdb,scmdbdg)'
*.log_archive_dest_1='LOCATION=/data2/archivelogs  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=scmdb'
*.log_archive_dest_2='SERVICE=scmdbdg ASYNC NOAFFIRM REOPEN=60 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)  DB_UNIQUE_NAME=scmdbdg'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=2000
*.open_links=50
*.open_links_per_instance=50
*.optimizer_mode='ALL_ROWS'
*.parallel_execution_message_size=8192
*.pga_aggregate_target=12884901888
*.processes=6000
*.remote_login_passwordfile='EXCLUSIVE'
*.session_cached_cursors=500
*.sga_max_size=64424509440
*.sga_target=64424509440
*.shared_pool_size=9663676416
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'
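
To compare the two instances' settings more systematically than reading the two pfiles side by side, one option is to dump every non-default parameter from each instance over JDBC and diff the outputs. This is just a sketch of that idea; the URL and credentials are placeholders, and it assumes the connecting user is allowed to read v$parameter.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DumpNonDefaultParams {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@xxx:1521/xxx";            // point at 200 or 203 (placeholder)
        try (Connection conn = DriverManager.getConnection(url, "xxxx", "xxxx");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "select name, value from v$parameter where isdefault = 'FALSE' order by name")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + " = " + rs.getString("value"));   // one line per parameter, easy to diff
            }
        }
    }
}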

Online redo logs

Online redo logs on 200: 5 groups, 2 GB each

/data/oradata/CDB200C/redo04.log
/data/oradata/CDB200C/redo05.log
/data/oradata/CDB200C/redo06.log
/data/oradata/CDB200C/redo07.log
/data/oradata/CDB200C/redo08.log

Online redo logs on 203: 8 groups, 4 GB each

/data/SCMDB/redo16a.log
/data/SCMDB/redo17a.log
/data/SCMDB/redo18a.log
/data/SCMDB/redo19a.log
/data/SCMDB/redo20a.log
/data/SCMDB/redo21a.log
/data/SCMDB/redo22a.log
/data/SCMDB/redo23a.log
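
The redo log layout listed above can also be confirmed from inside each database by joining v$log with v$logfile. A minimal JDBC sketch (placeholder URL and credentials, read access to the v$ views assumed):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ShowRedoLogs {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@xxx:1521/xxx";            // point at 200 or 203 (placeholder)
        String sql = "select l.group#, l.bytes / 1024 / 1024 as size_mb, l.status, f.member "
                   + "from v$log l join v$logfile f on f.group# = l.group# order by l.group#";
        try (Connection conn = DriverManager.getConnection(url, "xxxx", "xxxx");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // group number, size in MB, status, and file name for every redo log member
                System.out.printf("group %d  %d MB  %s  %s%n",
                        rs.getInt(1), rs.getLong(2), rs.getString(3), rs.getString(4));
            }
        }
    }
}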

fio tests

I ran a sequential-write fio test against the /data mount point on each server: 4K block size, 5 GB total.

The results show that 200's raw write performance is indeed worse than 203's.

fio test on 200

[root@billhost data]# fio -filename=/data/test200 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=4k -size=5G -numjobs=1 -group_reporting -name=test200
test200: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.7
Starting 1 thread
test200: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=156MiB/s][r=0,w=40.0k IOPS][eta 00m:00s]
test200: (groupid=0, jobs=1): err= 0: pid=209451: Tue Apr 11 19:47:22 2023
  write: IOPS=38.3k, BW=150MiB/s (157MB/s)(5120MiB/34217msec)
    clat (usec): min=15, max=13616, avg=25.31, stdev=14.33
     lat (usec): min=15, max=13617, avg=25.43, stdev=14.35
    clat percentiles (nsec):
     |  1.00th=[15936],  5.00th=[16192], 10.00th=[16320], 20.00th=[16512],
     | 30.00th=[19584], 40.00th=[20096], 50.00th=[29056], 60.00th=[30080],
     | 70.00th=[30336], 80.00th=[30592], 90.00th=[31360], 95.00th=[36608],
     | 99.00th=[51968], 99.50th=[55552], 99.90th=[63232], 99.95th=[67072],
     | 99.99th=[74240]
   bw (  KiB/s): min=113872, max=165784, per=99.95%, avg=153144.62, stdev=16359.26, samples=68
   iops        : min=28468, max=41446, avg=38286.12, stdev=4089.80, samples=68
  lat (usec)   : 20=39.16%, 50=59.33%, 100=1.51%, 250=0.01%
  lat (msec)   : 20=0.01%
  cpu          : usr=6.89%, sys=45.62%, ctx=1310753, majf=0, minf=84
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1310720,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=5120MiB (5369MB), run=34217-34217msec

Disk stats (read/write):
    md0: ios=0/1307776, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/1310833, aggrmerge=0/0, aggrticks=0/15948, aggrin_queue=15819, aggrutil=50.40%
  nvme0n1: ios=0/1310833, merge=0/0, ticks=0/17416, in_queue=17274, util=50.40%
  nvme1n1: ios=0/1310833, merge=0/0, ticks=0/14480, in_queue=14365, util=41.91%

fio test on 203

[root@scmdb data]# fio -filename=/data/test203 -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=4k -size=5G -numjobs=1 -group_reporting -name=test203
test203: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.7
Starting 1 thread
test203: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=236MiB/s][r=0,w=60.3k IOPS][eta 00m:00s]
test203: (groupid=0, jobs=1): err= 0: pid=157623: Tue Apr 11 19:29:27 2023
  write: IOPS=47.7k, BW=186MiB/s (195MB/s)(5120MiB/27478msec)
    clat (usec): min=12, max=9134, avg=20.40, stdev=18.50
     lat (usec): min=12, max=9135, avg=20.48, stdev=18.53
    clat percentiles (usec):
     |  1.00th=[   13],  5.00th=[   14], 10.00th=[   14], 20.00th=[   14],
     | 30.00th=[   14], 40.00th=[   14], 50.00th=[   14], 60.00th=[   14],
     | 70.00th=[   15], 80.00th=[   36], 90.00th=[   37], 95.00th=[   45],
     | 99.00th=[   83], 99.50th=[   93], 99.90th=[  116], 99.95th=[  130],
     | 99.99th=[  237]
   bw (  KiB/s): min=85520, max=244888, per=99.53%, avg=189898.30, stdev=66553.76, samples=54
   iops        : min=21380, max=61222, avg=47474.59, stdev=16638.47, samples=54
  lat (usec)   : 20=76.86%, 50=19.80%, 100=3.05%, 250=0.28%, 500=0.01%
  lat (usec)   : 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=4.85%, sys=42.00%, ctx=1310725, majf=0, minf=84
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1310720,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=5120MiB (5369MB), run=27478-27478msec

Disk stats (read/write):
    md0: ios=8/1305950, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=4/1310859, aggrmerge=0/1, aggrticks=0/14510, aggrin_queue=14438, aggrutil=54.42%
  nvme0n1: ios=8/1310859, merge=0/1, ticks=1/13946, in_queue=13882, util=50.38%
  nvme1n1: ios=0/1310859, merge=0/1, ticks=0/15074, in_queue=14994, util=54.42%

AWR screenshots

The AWR screenshots show that the average waits for log file sync and log file parallel write on 203 are far higher than on 200, which is why inserting 100,000 rows takes so much longer on 203 than on 200. (A JDBC cross-check of these two wait events is sketched after the screenshots.)

Screenshots for 200: 200 summary.png, 200 wait.png, 200 sql.png

Screenshots for 203: 203 summary.png, 203 wait.png, 203 sql.png
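
Since the screenshots themselves cannot be reproduced here, the same two wait events can be roughly cross-checked by querying v$system_event on each instance. The numbers are cumulative since instance startup, so they only sanity-check the AWR averages rather than replace them; the URL and credentials are placeholders, and read access to v$system_event is assumed.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CheckLogWaits {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@xxx:1521/xxx";            // point at 200 or 203 (placeholder)
        String sql = "select event, total_waits, "
                   + "round(time_waited_micro / total_waits / 1000, 2) as avg_wait_ms "
                   + "from v$system_event "
                   + "where event in ('log file sync', 'log file parallel write') and total_waits > 0";
        try (Connection conn = DriverManager.getConnection(url, "xxxx", "xxxx");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // event name, total waits, average wait in milliseconds since startup
                System.out.printf("%-28s waits=%d  avg=%.2f ms%n",
                        rs.getString(1), rs.getLong(2), rs.getDouble(3));
            }
        }
    }
}

Comparing the log file parallel write averages against the fio latencies measured above also helps separate raw storage latency from the rest of the commit path.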

From: https://www.cnblogs.com/ddzj01/p/18201166
