
Export/Import Test for a Partitioned Table

Posted: 2024-02-03 19:32:55

Topic: export/import test for a partitioned table.
Database: Oracle 19.13
OS: RHEL 7.9, 64-bit
Environment: single instance + multitenant
1. Prepare the data
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ADTPDB                         READ WRITE NO
SQL> alter session set container=adtpdb;

Session altered.

SQL> create user sms identified by sms;

User created.

SQL> grant dba to sms;

Grant succeeded.

SQL> conn sms/sms@adtpdb
Connected.

SQL> create table sales(
  2    product_id varchar2(5),sales_count number(10,2)
  3  )
  4  partition by range(sales_count)
  5  (
  6    partition p1 values less than(1000),
  7    partition p2 values less than(2000),
  8    partition p3 values less than(3000)
  9  );

Table created.

SQL> col TABLE_OWNER for a15
SQL> col TABLE_NAME for a15
SQL> col PARTITION_NAME for a15
SQL> col high_value for a15
SQL> set line 200
SQL> select table_owner,table_name,partition_name,high_value,partition_position from dba_tab_partitions u where u.table_name='SALES';

TABLE_OWNER     TABLE_NAME      PARTITION_NAME  HIGH_VALUE      PARTITION_POSITION
--------------- --------------- --------------- --------------- ------------------
SMS             SALES           P1              1000                             1
SMS             SALES           P2              2000                             2
SMS             SALES           P3              3000                             3
SQL> insert into sales values('1',600);

1 row created.

SQL> insert into sales values('2',1000);

1 row created.

SQL> insert into sales values('3',2300);

1 row created.

SQL> alter table sales add partition p4 values less than(maxvalue);

Table altered.

SQL> insert into sales values('4',6000);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from sales partition (p1);

PRODU SALES_COUNT
----- -----------
1             600

SQL> select * from sales partition (p2);

PRODU SALES_COUNT
----- -----------
2            1000

SQL> select * from sales partition (p3);

PRODU SALES_COUNT
----- -----------
3            2300

SQL> select * from sales partition (p4);

PRODU SALES_COUNT
----- -----------
4            6000

2. Export the data
Note: export partition P1 with expdp.
SQL> conn sys/oracle_4U@adtpdb as sysdba
Connected.
SQL> create directory expdp_dir as '/home/oracle';

Directory created.

SQL> grant read,write on directory expdp_dir to sms;

Grant succeeded.

[oracle@ora-leo-19cs ~]$ expdp sms/sms@adtpdb directory=expdp_dir dumpfile=partition_p1.dmp tables=sales:p1 logfile=expdp_p1.log

Export: Release 19.0.0.0.0 - Production on Wed Jan 31 21:20:09 2024
Version 19.13.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "SMS"."SYS_EXPORT_TABLE_01":  sms/********@adtpdb directory=expdp_dir dumpfile=partition_p1.dmp tables=sales:p1 logfile=expdp_p1.log 
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SMS"."SALES":"P1"                          5.539 KB       1 rows
Master table "SMS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SMS.SYS_EXPORT_TABLE_01 is:
  /home/oracle/partition_p1.dmp
Job "SMS"."SYS_EXPORT_TABLE_01" successfully completed at Wed Jan 31 21:20:27 2024 elapsed 0 00:00:15
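
Note: the same export can also be driven from a parameter file, which avoids shell-quoting issues around the tables=sales:p1 syntax. A sketch, with a hypothetical file name expdp_p1.par:

directory=expdp_dir
dumpfile=partition_p1.dmp
logfile=expdp_p1.log
tables=sales:p1

It would then be invoked as: expdp sms/sms@adtpdb parfile=expdp_p1.par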

3. Verify the dump file
-- Use the following PL/SQL block to confirm the generated file is an expdp dump file.
SQL> conn sms/sms@adtpdb
Connected.
SQL> set serveroutput on
SQL> declare
  2    v_filetype   NUMBER; -- 0=unknown 1=expdp 2=exp 3=ext
  3    v_info_table sys.ku$_dumpfile_info; -- PL/SQL table with file info
  4  begin
  5    dbms_datapump.get_dumpfile_info(filename   => 'partition_p1.dmp',
  6                                    directory  => upper('expdp_dir'),
  7                                    info_table => v_info_table,
  8                                    filetype   => v_filetype);
  9    dbms_output.put_line('Filetype : ' || v_filetype);
 10  end;
 11  /
Filetype : 1

4. Drop the table partition
Note: drop partition P1 from the sms.sales table.
SQL> conn sms/sms@adtpdb
Connected.
SQL> alter table sales drop partition p1;

Table altered.

SQL> col TABLE_OWNER for a15
SQL> col TABLE_NAME for a15
SQL> col PARTITION_NAME for a15
SQL> col high_value for a15
SQL> set line 200
SQL> select table_owner,table_name,partition_name,high_value,partition_position from dba_tab_partitions u where u.table_name='SALES';

TABLE_OWNER     TABLE_NAME      PARTITION_NAME  HIGH_VALUE      PARTITION_POSITION
--------------- --------------- --------------- --------------- ------------------
SMS             SALES           P2              2000                             1
SMS             SALES           P3              3000                             2
SMS             SALES           P4              MAXVALUE                         3

5. Restore the table partition
Note: restore the partition with impdp, remapping the table to a staging copy (sales_20240203) so the original table is left untouched.
SQL> create directory impdp_dir as '/home/oracle/dumpfile';

Directory created.

SQL> grant read,write on directory impdp_dir to sms;

Grant succeeded.

[oracle@ora-leo-19cs ~]$ mv partition_p1.dmp dumpfile
[oracle@ora-leo-19cs ~]$ ll dumpfile
total 188
-rw-r----- 1 oracle oinstall 192512 Jan 31 21:20 partition_p1.dmp

[oracle@ora-leo-19cs ~]$ vi impdp.sh
impdp sms/sms@adtpdb \
directory=impdp_dir \
remap_schema=SMS:SMS \
remap_table=sales:sales_20240203 \
dumpfile=partition_p1.dmp \
logfile=impdp_sales#p1.log \
job_name=sales#p1 \
transform=disable_archive_logging:Y \
table_exists_action=append \
parallel=8

Note: in a RAC environment, add the cluster=N parameter so the Data Pump job runs only on the instance you are connected to.

[oracle@ora-leo-19cs ~]$ sh impdp.sh &
[oracle@ora-leo-19cs dumpfile]$ pwd
/home/oracle/dumpfile
[oracle@ora-leo-19cs dumpfile]$ ls -ltr
total 192
-rw-r----- 1 oracle oinstall 192512 Jan 31 21:20 partition_p1.dmp
-rw-r--r-- 1 oracle oinstall    988 Feb  3 17:46 impdp_sales#p1.log
[oracle@ora-leo-19cs dumpfile]$ cat impdp_sales#p1.log
;;; 
Import: Release 19.0.0.0.0 - Production on Sat Feb 3 17:46:31 2024
Version 19.13.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "SMS"."SALES#P1" successfully loaded/unloaded
Starting "SMS"."SALES#P1":  sms/********@adtpdb directory=impdp_dir remap_schema=SMS:SMS remap_table=sales:sales_20240203 dumpfile=partition_p1.dmp logfile=impdp_sales#p1.log job_name=sales#p1 transform=disable_archive_logging:Y table_exists_action=append parallel=8 
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SMS"."SALES_20240203":"P1"                 5.539 KB       1 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SMS"."SALES#P1" successfully completed at Sat Feb 3 17:46:54 2024 elapsed 0 00:00:22

6. Verify the data
[oracle@ora-leo-19cs dumpfile]$ sqlplus sms/sms@adtpdb

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Feb 3 17:49:25 2024
Version 19.13.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Last Successful login time: Sat Feb 03 2024 17:46:31 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.13.0.0.0

SQL> col table_name for a15
SQL> col partition_name for a20
SQL> col high_value for a20
SQL> set line 200
SQL> r
  1* select table_name,partition_name,high_value,partition_position from user_tab_partitions where table_name='SALES_20240203'

TABLE_NAME      PARTITION_NAME       HIGH_VALUE           PARTITION_POSITION
--------------- -------------------- -------------------- ------------------
SALES_20240203  P1                   1000                                  1
SALES_20240203  P2                   2000                                  2
SALES_20240203  P3                   3000                                  3
SALES_20240203  P4                   MAXVALUE                              4

SQL> select * from sales_20240203 partition(p1);

PRODU SALES_COUNT
----- -----------
1             600

SQL> select * from sales_20240203 partition(p2);

no rows selected

SQL> select * from sales_20240203 partition(p3);

no rows selected

SQL> select * from sales_20240203 partition(p4);

no rows selected

Note: the data was successfully restored into the staging table sales_20240203.
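
Note: to finish the recovery, the recovered rows could be copied back into the original table. Because new range partitions can only be added above the current highest boundary, the p1 slot would first be re-created by splitting p2 at the old boundary. A sketch of that follow-up, not executed in this test:

SQL> alter table sales split partition p2 at (1000)
  2    into (partition p1, partition p2);
SQL> insert into sales select * from sales_20240203 partition (p1);
SQL> commit;
SQL> drop table sales_20240203 purge;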

From: https://blog.51cto.com/u_12991611/9570168
