
Backing Up and Restoring MySQL Using MySQL Shell


MySQL Shell is an advanced client and code editor for MySQL. In addition to the SQL functionality provided by the classic mysql client, MySQL Shell offers scripting capabilities for JavaScript and Python and includes APIs for working with MySQL. The X DevAPI lets users work with both relational and document data. It is strongly recommended to use MySQL Shell 8.0 with MySQL Server 8.0 and 5.7.

MySQL Shell includes utilities for working with MySQL. To access the utilities from within MySQL Shell, use the util global object, which is available in JavaScript and Python modes but not in SQL mode. These are the tools used to take backups; the basic commands are listed below, followed by a short sketch of how to access them.

·util.dumpTables – dump one or more tables from a single database
·util.dumpSchemas – dump one or more databases
·util.dumpInstance – dump an entire instance
·util.loadDump – restore a dump
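
To verify that the utilities are available, start MySQL Shell and call util.help(). A minimal sketch; the root@localhost account is a placeholder for your own connection:

[root@centos12 ~]# mysqlsh root@localhost
MySQL localhost JS > util.help()   // lists the util functions, including the four above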

 

1. Dumping tables
The following command dumps the table sbtest1 from the sysbench database and stores the backup in the target directory sysbench_dumps. If the target directory does not exist, util.dumpTables creates it; because of the dialect:"csv" option, the dump files are written with a csv extension.
Compression and chunking are enabled by default. With chunking enabled, a table dump is split into multiple files according to the chunk size. By default, dump files are created with a tsv (tab-separated values) extension.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dialect:"csv"})
Acquiring global read lock
Global read lock acquired
Initializing - done
1 tables and 0 views will be dumped.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 6 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
101% (1000.00K rows / ~986.40K rows), 317.96K rows/s, 63.91 MB/s
Dump duration: 00:00:03s
Total duration: 00:00:04s
Schemas dumped: 1
Tables dumped: 1
Data size: 198.89 MB
Rows written: 1000000
Bytes written: 198.89 MB
Average throughput: 60.96 MB/s
MySQL localhost JS >

The dump directory now contains the following files:

[root@centos12 sysbench_dumps]# ls -lrth
total 190M
-rw-r-----. 1 root root 869 Jun 21 13:08 @.json
-rw-r-----. 1 root root 240 Jun 21 13:08 @.sql
-rw-r-----. 1 root root 240 Jun 21 13:08 @.post.sql
-rw-r-----. 1 root root 231 Jun 21 13:08 sysbench.json
-rw-r-----. 1 root root 638 Jun 21 13:08 [email protected]
-rw-r-----. 1 root root 474 Jun 21 13:08 sysbench.sql
-rw-r-----. 1 root root 789 Jun 21 13:08 [email protected]
-rw-r-----. 1 root root 1.5K Jun 21 13:08 [email protected]
-rw-r-----. 1 root root 190M Jun 21 13:08 [email protected]
-rw-r-----. 1 root root 233 Jun 21 13:08 @.done.json
[root@centos12 sysbench_dumps]# pwd
/home/vagrant/sysbench_dumps

Where:
@.json : overall information about the dump, including the dump options, server name, user name, and binary log file name and position
@.sql, @.post.sql : server version and dump version information
sysbench.json : database and table information
[email protected] : details of the table sbtest1, including column names, indexes, triggers, character set, and partitions
sysbench.sql : the statement that creates the sysbench database
[email protected] : the statement that creates the table sbtest1
@.done.json : end time and dump file size
[email protected] : the table data dump file

 

2. Backing up only the table structure
The ddlOnly:true option backs up only the table structure.
The following command fetches the structure of sbtest1 from the sysbench database and stores it in the sysbench_dumps directory.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, ddlOnly:true})


3. Dumping only the table data
The dataOnly:true option dumps only the data.
The following command fetches the data of sbtest1 from the sysbench database and stores it in the sysbench_dumps directory.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true})

 

4. Dumping selected data
The option "where" : {"databasename.tablename": "condition"} dumps only the rows that match the given condition.
The following command dumps the rows with id between 1 and 10:

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6 ,chunking:false, dataOnly:true, "where" : {"sysbench.sbtest1": "id between 1 and 10"}})

Multiple tables can also be specified in a single command. The syntax is:

"where" : {"databasename1.tablename1": "condition for databasename1.tablename1", "databasename2.tablename2": "condition for databasename2.tablename2"}

For example, dump the rows with id between 1 and 10 from table sbtest1, and the rows with id between 100 and 110 from table sbtest2:

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "where" : {"sysbench.sbtest1": "id between 1 and 10", "sysbench.sbtest2": "id between 100 and 110"}})


5. Dumping data from partitions
Syntax:

"partitions" : {"db1.table1": ["list of partitions"],"db2.table1": ["list of partitions"]}

For example, dump partitions p1 and p2 of table sbtest1 and partitions p4 and p5 of table sbtest2:

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "partitions" : {"sysbench.sbtest1": ["p1", "p2"],"sysbench.sbtest2": ["p4", "p5"]}})

 

6. Dumping databases
When dumping a database, events, triggers, and routines are backed up by default. They are stored in the database_name.sql file.
For example, dump the percona and sakila databases:

MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {"compression":"none", "threads":6, chunking:false})

The following command skips events, routines, and triggers:

MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {events:false, routines:false, triggers:false, "compression":"none", "threads":6, chunking:false})

You can also include or exclude specific events, routines, and triggers with the options below; note that triggers are addressed as schema.table (all triggers on that table) or schema.table.trigger (a single trigger). A concrete sketch follows the list.

includeEvents   : ["db1.event1", "db2.event2", ...]
includeRoutines : ["db1.procedure1", "db2.function2", ...]
includeTriggers : ["db1.table1", "db2.table2.trigger2", ...]

excludeEvents   : ["db1.event1", "db2.event2", ...]
excludeTriggers : ["db1.table1", "db2.table2.trigger2", ...]
excludeRoutines : ["db1.procedure1", "db2.function2", ...]
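
For example, a sketch assuming the stock sakila sample schema: keep only the triggers defined on sakila.film while skipping the film_in_stock procedure:

MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {includeTriggers:["sakila.film"], excludeRoutines:["sakila.film_in_stock"], "compression":"none", "threads":6, chunking:false})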

 

7. Backing up selected tables from different databases
Sometimes we may need to back up selected tables from different databases. We can do this with the includeTables option.
Syntax:

includeTables : ["db1.table1", "db2.table2", ...]

The following command fetches the users table from the percona database and the actor table from the sakila database:

MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {includeTables:["percona.users", "sakila.actor"], "compression":"none", "threads":6, chunking:false})

 

8. Instance-level dump
The util.dumpInstance command stores the data of the entire instance under the path "/backup/instance_dump". The system databases (mysql, sys, information_schema, performance_schema) are excluded. By default, all account information is fetched from the instance and stored in the file @.users.sql, which contains the CREATE USER and GRANT statements for all users.

MySQL localhost JS > util.dumpInstance("/backup/instance_dump", {"compression":"none", "threads":6, chunking:false})

Other options (a combined sketch follows the list):

users : false                                 - Skip the users dump
excludeUsers : ["user1", "user2"]             - Exclude particular users
includeUsers : ["user1", "user2"]             - Include particular users
excludeSchemas : ["db1", "db2"]               - Exclude particular schemas
includeSchemas : ["db1", "db2"]               - Include particular schemas
excludeTables : ["db1.table1", "db2.table2"]  - Exclude particular tables
includeTables : ["db1.table1", "db2.table2"]  - Include particular tables
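
For example, a sketch that dumps the whole instance while skipping account information and excluding one schema (the schema name "test" is a placeholder):

MySQL localhost JS > util.dumpInstance("/backup/instance_dump", {users:false, excludeSchemas:["test"], "compression":"none", "threads":6, chunking:false})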

 

9. Restoring data into a single database
The util.loadDump command is used to restore data. Loading a local dump requires local_infile to be enabled on the target server (see the sketch after the syntax).
Syntax:

util.loadDump("/path/of/the/dump", {options})

The following command restores the data into the database test_restore. When we need to restore into a different schema, we must use the schema option, here schema:"test_restore". Otherwise, the dump is restored into the schema it was taken from.

MySQL localhost SQL > create database test_restore;
Query OK, 1 row affected (0.3658 sec)
MySQL localhost SQL > js
Switching to JavaScript mode...
MySQL localhost JS > util.loadDump("/home/vagrant/schema_dump", {schema:"test_restore", progressFile: "progress.json", threads: 8, showProgress:true, resetProgress:true})
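
Before the actual restore, the dump can be validated against the target server without loading anything; a sketch using the documented dryRun option:

MySQL localhost JS > util.loadDump("/home/vagrant/schema_dump", {schema:"test_restore", dryRun:true})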

  

10. Restoring an entire instance and configuring replication
Here we simply load the full instance dump from the path /home/vagrant/instance_dump using eight parallel threads.

MySQL localhost JS > util.loadDump("/home/vagrant/instance_dump", {progressFile: "progress.json", threads: 8, showProgress:true, resetProgress:true})
Loading DDL and Data from '/home/vagrant/instance_dump' using 8 threads.
Opening dump...
Target is MySQL 8.0.33-25. Dump was produced from MySQL 8.0.32-24
NOTE: Load progress file detected for the instance but 'resetProgress' option was enabled. Load progress will be discarded and the whole dump will be reloaded.
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
8 thds loading / 100% (19.18 MB / 19.18 MB), 541.36 KB/s, 6 / 23 tables and partitions done
Recreating indexes - done
Executing common postamble SQL
23 chunks (100.00K rows, 19.18 MB) for 11 tables in 2 schemas were loaded in 11 sec (avg throughput 2.53 MB/s)
0 warnings were reported during the load.
MySQL localhost JS >

Get the binary log file name and position from the @.json file and configure replication:

[root@centos12 instance_dump]# cat @.json | grep -i binlog
"binlogFile": "centos12-bin.000006",
"binlogPosition": 760871466,
[root@centos12 instance_dump]#

MySQL localhost SQL > CHANGE REPLICATION SOURCE TO SOURCE_HOST="192.168.33.12", SOURCE_USER="bhuvan",SOURCE_PASSWORD="Bhuvan@123", SOURCE_LOG_FILE='centos12-bin.000006',SOURCE_LOG_POS=760871466;
Query OK, 0 rows affected, 2 warnings (0.1762 sec)

MySQL localhost SQL > START REPLICA;
Query OK, 0 rows affected (0.1156 sec)
MySQL localhost SQL > show replica status\G
*************************** 1. row ***************************
Replica_IO_State: Waiting for source to send event
Source_Host: 192.168.33.12
Source_User: bhuvan
Source_Port: 3306
Connect_Retry: 60
Source_Log_File: centos12-bin.000006
Read_Source_Log_Pos: 823234119
Relay_Log_File: centos11-relay-bin.000002
Relay_Log_Pos: 1152129
Relay_Source_Log_File: centos12-bin.000006
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Source_Log_Pos: 762023266
Relay_Log_Space: 62363195
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Source_SSL_Allowed: No
Source_SSL_CA_File:
Source_SSL_CA_Path:
Source_SSL_Cert:
Source_SSL_Cipher:
Source_SSL_Key:
Seconds_Behind_Source: 718
Source_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Source_Server_Id: 100
Source_UUID: f46a1600-045e-11ee-809f-0800271333ce
Source_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Replica_SQL_Running_State: Waiting for dependent transaction to commit
Source_Retry_Count: 86400
Source_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Source_SSL_Crl:
Source_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Source_TLS_Version:
Source_public_key_path:
Get_Source_public_key: 0
Network_Namespace:
1 row in set (0.1470 sec)

 

From: https://www.cnblogs.com/abclife/p/17519973.html

    mysql联合表查询从表即使有索引依然ALL的一个原因-索引ALL解决,字符编码方式不一致导致全表搜索那就是主表和从表的关联字段的编码方式不一样!!!产生的现象:解决之后,正确的使用了t2.order_noref索引,而不是ALL检查mysql主体编码方式,是否由于后来新建的表的关联字段和之前的主表的......