MySQL Shell provides three dump utilities:
·util.dumpTables(): dumps specific tables
·util.dumpSchemas(): dumps one or more schemas
·util.dumpInstance(): dumps the entire instance
Using util.dumpTables()
Syntax:
util.dumpTables(schema, tables, outputUrl[, options])
where:
·schema: the schema that contains the tables
·tables: an array listing the tables to dump
·outputUrl: the output directory, which must be empty
For example, to dump a single table:
util.dumpTables('myabc',['abce'],'/data/dump')
This dumps the table abce from the myabc schema into the /data/dump directory.
MySQL localhost JS > util.dumpTables('myabc',['abce'],'/data/dump')
Acquiring global read lock
Global read lock acquired
Initializing - done
1 tables and 0 views will be dumped.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
100% (1.82K rows / ~1.82K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s
Total duration: 00:00:00s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 573.68 KB
Compressed data size: 146.79 KB
Compression ratio: 3.9
Rows written: 1818
Bytes written: 146.79 KB
Average uncompressed throughput: 573.68 KB/s
Average compressed throughput: 146.79 KB/s
The dump produces a set of DDL, metadata, and compressed data files in the output directory.
The default concurrency is 4 threads; a different degree of parallelism can be set explicitly:
util.dumpTables('myabc',['abce'],'/data/dump',{threads:8})
Rows can be filtered during the dump with the where option, an object that maps each schema.table name to a filter condition:
util.dumpTables('myabc',['abce'],'/data/dump', {where: {"myabc.abce":"title like 'abc%'"}})
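The dump utilities also accept a dryRun option, which is a convenient way to check a where filter before running the real export. A minimal sketch, reusing the schema and table from above (run inside MySQL Shell JS mode against a live server):

```javascript
// dryRun: true reports what would be dumped; no files are written,
// so the filter can be verified without touching /data/dump.
util.dumpTables('myabc', ['abce'], '/data/dump', {
  dryRun: true,
  where: {"myabc.abce": "title like 'abc%'"}
})
```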
Loading the data
Whether the data was dumped with util.dumpTables(), util.dumpSchemas(), or util.dumpInstance(), it is loaded with util.loadDump().
util.loadDump('/data/dump', {schema: 'myabc-2', threads:8})
This loads the dump taken above into a new schema, myabc-2. If the schema does not exist, it is created. If a table already exists in the target, an error is reported.
MySQL localhost JS > util.loadDump('/data/dump', {schema: 'myabc-2'})
Loading DDL and Data from '/data/dump' using 4 threads.
Opening dump...
Target is MySQL 8.0.36. Dump was produced from MySQL 8.0.36
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
Executing common postamble SQL
100% (573.68 KB / 573.68 KB), 0.00 B/s, 1 / 1 tables done
Recreating indexes - done
2 chunks (1.82K rows, 573.68 KB) for 1 tables in 1 schemas were loaded in 0 sec (avg throughput 573.68 KB/s)
0 warnings were reported during the load.
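Because a pre-existing table makes the load fail, it can be worth checking the target first. util.loadDump() also accepts a dryRun option; a sketch, assuming the same dump directory and target schema as above:

```javascript
// Report what the load would do, including conflicts with
// pre-existing objects, without creating schemas or loading rows.
util.loadDump('/data/dump', {schema: 'myabc-2', threads: 8, dryRun: true})
```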
Using util.dumpSchemas()
Multiple schemas can be dumped at the same time. Syntax:
util.dumpSchemas(schemas, outputUrl[, options])
where:
·schemas: an array of schema names
Dumping a single schema:
util.dumpSchemas(['myabc'], '/data/dump', {threads: 4})
Loading the data:
util.loadDump('/data/dump', {schema: 'myabc-3', threads: 8})
This loads the dump taken above into a new schema, myabc-3. If the schema does not exist, it is created.
Dumping multiple schemas:
util.dumpSchemas(['myabc', 'myabc-2'], '/data/dump', {threads: 4})
MySQL localhost JS > util.dumpSchemas(['myabc', 'myabc-2'], '/data/dump', {threads: 4})
Acquiring global read lock
Global read lock acquired
Initializing - done
2 schemas will be dumped and within them 3 tables, 0 views.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
100% (8.76K rows / ~8.76K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s
Total duration: 00:00:00s
Schemas dumped: 2
Tables dumped: 3
Uncompressed data size: 2.51 MB
Compressed data size: 532.64 KB
Compression ratio: 4.7
Rows written: 8755
Bytes written: 532.64 KB
Average uncompressed throughput: 2.51 MB/s
Average compressed throughput: 532.64 KB/s
Loading the data:
util.loadDump('/data/dump')
When a dump contains multiple schemas, do not specify the schema option on load. Make sure, however, that the target instance does not already contain schemas with the same names (or identically named tables inside them); otherwise the load reports an error.
MySQL localhost JS > util.loadDump('/data/dump', {threads:8})
Loading DDL and Data from '/data/dump' using 8 threads.
Opening dump...
Target is MySQL 8.0.36. Dump was produced from MySQL 8.0.36
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
Executing common postamble SQL
100% (2.51 MB / 2.51 MB), 0.00 B/s, 3 / 3 tables done
Recreating indexes - done
6 chunks (8.76K rows, 2.51 MB) for 3 tables in 2 schemas were loaded in 0 sec (avg throughput 2.51 MB/s)
0 warnings were reported during the load.
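If the target instance already contains one of the schemas in a multi-schema dump, the conflicting schema can be skipped instead of aborting the whole load. A sketch using the load utility's excludeSchemas option (the schema name here is just an example):

```javascript
// Load everything from the dump except a schema that already
// exists on the target instance.
util.loadDump('/data/dump', {threads: 8, excludeSchemas: ['myabc-2']})
```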
Using util.dumpInstance()
util.dumpInstance('/data/dump', {threads:8})
Note that MySQL system schemas (such as information_schema and performance_schema) are not dumped.
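User schemas that should not be part of an instance dump can be excluded explicitly. A sketch using the instance dump utility's excludeSchemas option (the excluded schema name is just an example):

```javascript
// Dump the whole instance except one user schema; system schemas
// (information_schema, performance_schema, etc.) are skipped automatically.
util.dumpInstance('/data/dump', {threads: 8, excludeSchemas: ['myabc-2']})
```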
Loading the data:
util.loadDump('/data/dump', {threads: 8})

From: https://www.cnblogs.com/abclife/p/18231227