
Notes on Backup and Restore with mysqlsh

Posted: 2023-11-30 14:49:27

Reference:

MySQL Shell backup and restore documentation

1. Install MySQL Shell

1.1 Download

1.2 Install

[root@vm1 ~]# wget https://cdn.mysql.com//Downloads/MySQL-Shell/mysql-shell-8.0.35-linux-glibc2.17-x86-64bit.tar.gz
[root@vm1 ~]# tar xvf mysql-shell-8.0.35-linux-glibc2.17-x86-64bit.tar.gz -C /usr/local/
[root@vm1 ~]# mv /usr/local/mysql-shell-8.0.35-linux-glibc2.17-x86-64bit/ /usr/local/mysql-shell
[root@vm1 ~]# vi ~/.bash_profile     # add /usr/local/mysql-shell/bin to PATH, e.g. export PATH=$PATH:/usr/local/mysql-shell/bin
[root@vm1 ~]# source ~/.bash_profile
[root@vm1 ~]# mysqlsh

 MySQL  JS > \sql
Switching to SQL mode... Commands end with ;
 MySQL  SQL > 

2. Generate test data

[root@vm1 sysbench]# sysbench --db-driver=mysql --mysql-host=127.0.0.1 --mysql-port=3380 --mysql-user=root --mysql-password=123456 --mysql-db=sbtest --table_size=250000 --tables=10 --events=0 --time=600  oltp_read_write prepare

3. Log in to the database with mysqlsh

[root@vm1 ~]# mysqlsh -uroot -p -h127.0.0.1
Please provide the password for 'root@127.0.0.1': ******
Save password for 'root@127.0.0.1'? [Y]es/[N]o/Ne[v]er (default No): 
MySQL Shell 8.2.1
 MySQL  127.0.0.1:33060+ ssl  JS > 

4. Backup and restore

  • Single-table export syntax: util.exportTable(table, outputUrl[, options])

4.1 Export, with an optional WHERE filter (default tab-separated format)

MySQL  127.0.0.1:33060+ ssl  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest1.txt")
MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest1where.txt", {"where" : "id > 10000"})

4.2 Export, with an optional WHERE filter (CSV format)

MySQL  127.0.0.1:33060+ ssl  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest100.txt",{dialect: "csv"})
MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.exportTable("sbtest.sbtest1", "file:///root/bak/sbtest100where.txt", {"where" : "id > 10000",dialect: "csv"})
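The two dialects differ mainly in delimiters and quoting: the default writes tab-separated, unquoted fields, while the csv dialect writes comma-separated, double-quoted fields (see the mysqlsh dialect documentation for the exact options). As a rough illustration, a Python sketch of reading both shapes back with the standard csv module; the sample rows below are made up, not real mysqlsh output:

```python
import csv
import io

# Hypothetical rows shaped like sbtest1 (id, k, c, pad).
tab_data = "1\t6461363\tsome-c-value\tsome-pad-value\n"
csv_data = '1,6461363,"some-c-value","some-pad-value"\r\n'

# Default exportTable dialect: tab-separated, newline-terminated, no quoting.
tab_rows = list(csv.reader(io.StringIO(tab_data), delimiter="\t"))

# csv dialect: comma-separated fields, double quotes around text fields.
csv_rows = list(csv.reader(io.StringIO(csv_data)))

print(tab_rows[0])  # ['1', '6461363', 'some-c-value', 'some-pad-value']
print(csv_rows[0])  # same fields, parsed from the CSV shape
```

Either parse yields the same field list, which is why util.importTable only needs the dialect option to load either file back.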

4.3 Import (default format):

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.importTable("file:///root/bak/sbtest1.txt", {schema: "sbtest", table: "sbtest100",  showProgress: true})
Importing from file '/root/bak/sbtest1.txt' to table `sbtest`.`sbtest100` in MySQL Server at 127.0.0.1:3380 using 1 thread
[Worker000] sbtest1.txt: Records: 250000  Deleted: 0  Skipped: 0  Warnings: 0
100% (48.38 MB / 48.38 MB), 48.23 MB/s
File '/root/bak/sbtest1.txt' (48.38 MB) was imported in 1.3189 sec at 36.68 MB/s
Total rows affected in sbtest.sbtest100: Records: 250000  Deleted: 0  Skipped: 0  Warnings: 0
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > 

4.4 Import (CSV format):

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.importTable("file:///root/bak/sbtest100.txt", {schema: "sbtest", table: "sbtest100",  showProgress: true, dialect: "csv"})
Importing from file '/root/bak/sbtest100.txt' to table `sbtest`.`sbtest100` in MySQL Server at 127.0.0.1:3380 using 1 thread
[Worker000] sbtest100.txt: Records: 250000  Deleted: 0  Skipped: 0  Warnings: 0
100% (49.63 MB / 49.63 MB), 48.23 MB/s
File '/root/bak/sbtest100.txt' (49.63 MB) was imported in 1.4062 sec at 35.30 MB/s
Total rows affected in sbtest.sbtest100: Records: 250000  Deleted: 0  Skipped: 0  Warnings: 0
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > 

4.5 Multi-threaded dump and load

util.dumpInstance(outputUrl[, options]) 
util.dumpSchemas(schemas, outputUrl[, options])
util.dumpTables(schema, tables, outputUrl[, options])

4.6 Dump an instance

Note: the dump uses FLUSH TABLES WITH READ LOCK to obtain a consistent position (when the backup lock is not available to the account).

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpInstance("/root/bak/instance", {showProgress: true})
Util.dumpInstance: Cannot proceed with the dump, the specified directory '/root/bak/instance' already exists at the target location /root/bak/instance and is not empty. (ArgumentError)
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpInstance("/root/bak/instance", {showProgress: true})
NOTE: Backup lock is not available to the account 'root'@'%' and DDL changes will not be blocked. The dump may fail with an error if schema changes are made while dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done 
6 out of 10 schemas will be dumped and within them 15 tables, 0 views.
1 out of 4 users will be dumped.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Writing users DDL
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
101% (2.75M rows / ~2.71M rows), 1.29M rows/s, 264.73 MB/s uncompressed, 120.47 MB/s compressed                
Dump duration: 00:00:02s                                                                       
Total duration: 00:00:02s                                                                      
Schemas dumped: 6                                                                              
Tables dumped: 15                                                                              
Uncompressed data size: 532.20 MB                                                              
Compressed data size: 242.12 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 2750012                                                                          
Bytes written: 242.12 MB                                                                       
Average uncompressed throughput: 258.56 MB/s                                                   
Average compressed throughput: 117.63 MB/s                                                     
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > 

Files produced by the dump:

  • @.done.json: records the dump end time and the size of the dump set; written when the dump finishes.
  • @.json: records dump metadata, including the consistent position at dump time: binlogFile, binlogPosition and gtidExecuted. This information can be used to set up replication.
  • @.sql, @.post.sql: contain only comments by default, but when loading with util.loadDump we can add custom SQL through them: @.sql runs before the data import, @.post.sql runs after it.
  • sbtest.json: records the dumped tables, views, events, functions and stored procedures in sbtest.
  • *.tsv: the data files. A look at their content:

    # head -2 sbtest@[email protected]
    1	6461363	68487932199-96439406143-93774651418-41631865787-96406072701-20604855487-25459966574-28203206787-41238978918-19503783441	22195207048-70116052123-74140395089-76317954521-98694025897
    2	1112248	13241531885-45658403807-79170748828-69419634012-13605813761-77983377181-01582588137-21344716829-87370944992-02457486289	28733802923-10548894641-11867531929-71265603657-36546888392

  • TSV format: one record per line, with fields separated by tabs (\t).
  • sbtest@sbtest1.json: records table-level metadata such as column names and the field separator (fieldsTerminatedBy).
  • sbtest@sbtest1.sql: the CREATE TABLE statement for sbtest.sbtest1.
  • sbtest.sql: the CREATE DATABASE statement; any stored procedures, functions and events in the schema are also written to this file.
  • @.users.sql: CREATE USER and GRANT statements. By default the three internal accounts mysql.infoschema, mysql.session and mysql.sys are not dumped.
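Because @.json carries binlogFile, binlogPosition and gtidExecuted, a dump can seed a replica. A sketch of pulling those coordinates out with Python; the JSON below is a hypothetical minimal stand-in, not a real @.json (real files carry many more keys, which vary by mysqlsh version):

```python
import json

# Hypothetical stand-in for @.json; only the consistency coordinates are shown.
meta_text = """
{
  "binlogFile": "binlog.000007",
  "binlogPosition": 157,
  "gtidExecuted": "3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5"
}
"""
meta = json.loads(meta_text)

# These values could seed CHANGE REPLICATION SOURCE TO ... (binlog position
# replication) or SET GLOBAL gtid_purged (GTID replication) on a new replica.
print(meta["binlogFile"], meta["binlogPosition"])
print(meta["gtidExecuted"])
```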

4.7 Dump multiple schemas

 MySQL  127.0.0.1:3380 ssl  sbtest  JS > util.dumpSchemas(["sbtest","ceshi"], "/root/bak/schemas",{threads:8})
NOTE: Backup lock is not available to the account 'root'@'%' and DDL changes will not be blocked. The dump may fail with an error if schema changes are made while dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done 
2 schemas will be dumped and within them 13 tables, 0 views.
Gathering information - done 
All transactions have been started
Global read lock has been released
Writing global DDL files
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
101% (2.75M rows / ~2.71M rows), 1.35M rows/s, 271.71 MB/s uncompressed, 123.68 MB/s compressed                
Dump duration: 00:00:02s                                                                       
Total duration: 00:00:02s                                                                      
Schemas dumped: 2                                                                              
Tables dumped: 13                                                                              
Uncompressed data size: 532.20 MB                                                              
Compressed data size: 242.12 MB                                                                
Compression ratio: 2.2                                                                         
Rows written: 2750008                                                                          
Bytes written: 242.12 MB                                                                       
Average uncompressed throughput: 259.92 MB/s                                                   
Average compressed throughput: 118.25 MB/s                                                     
 MySQL  127.0.0.1:3380 ssl  sbtest  JS > 

4.8 Dump table definitions only, without locking

 MySQL  127.0.0.1:3380 ssl  JS > util.dumpSchemas(["sbtest","ceshi"], "/root/bak/ddls",{threads:8, ddlOnly:true, consistent:false})
Initializing - done 
2 schemas will be dumped and within them 13 tables, 0 views.
Gathering information - done 
WARNING: The dumped value of gtid_executed is not guaranteed to be consistent
Writing global DDL files
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done         
Writing table metadata - done         
Starting data dump
0% (0 rows / ~2.71M rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s                                                          
Total duration: 00:00:00s                                                         
Schemas dumped: 2                                                                 
Tables dumped: 13                                                                 
Uncompressed data size: 0 bytes                                                   
Compressed data size: 0 bytes                                                     
Compression ratio: 0.0                                                            
Rows written: 0                                                                   
Bytes written: 0 bytes                                                            
Average uncompressed throughput: 0.00 B/s                                         
Average compressed throughput: 0.00 B/s              

4.9 Dump selected tables

 MySQL  127.0.0.1:3380 ssl  JS > util.dumpSchemas(["sbtest"], "/root/bak/includeTables",{threads:8, consistent:false, includeTables:["sbtest.sbtest1"]})
Initializing - done 
1 schemas will be dumped and within them 1 out of 12 tables, 0 views.
Gathering information - done 
WARNING: The dumped value of gtid_executed is not guaranteed to be consistent
Writing global DDL files
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done       
Writing DDL - done       
Writing table metadata - done       
Starting data dump
101% (250.00K rows / ~246.67K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s                                                                    
Total duration: 00:00:00s                                                                   
Schemas dumped: 1                                                                           
Tables dumped: 1                                                                            
Uncompressed data size: 48.38 MB                                                            
Compressed data size: 22.01 MB                                                              
Compression ratio: 2.2                                                                      
Rows written: 250000                                                                        
Bytes written: 22.01 MB                                                                     
Average uncompressed throughput: 48.38 MB/s                                                 
Average compressed throughput: 22.01 MB/s                                                   
 MySQL  127.0.0.1:3380 ssl  JS > 

5. Restore data

Note: by default the load writes to the binlog. To disable binary logging during the import, enable the skipBinlog option (it issues SET sql_log_bin=0 in the loading sessions, so the account needs the privilege to set that variable).

5.1 Restore an instance dump

 MySQL  172.17.0.2:33060+ ssl  JS > util.loadDump("/root/bak/instance", {threads: 8})
Loading DDL and Data from '/root/bak/instance' using 8 threads.
Opening dump...
Target is MySQL 8.0.35. Dump was produced from MySQL 8.0.32
Scanning metadata - done       
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done       
Executing view DDL - done       
Starting data load
3 thds loading \ 100% (532.20 MB / 532.20 MB), 34.93 MB/s, 12 / 15 tables done
Recreating indexes - done       
Executing common postamble SQL                                                
17 chunks (2.75M rows, 532.20 MB) for 15 tables in 6 schemas were loaded in 12 sec (avg throughput 47.29 MB/s)
0 warnings were reported during the load.                                     
 MySQL  172.17.0.2:33060+ ssl  JS > 

5.2 Load the earlier dump of the ceshi and sbtest schemas, with binlog writes disabled

 MySQL  172.17.0.2:33060+ ssl  JS > util.loadDump("/root/bak/schemas", {threads: 8,skipBinlog: true})
Loading DDL and Data from '/root/bak/schemas' using 8 threads.
Opening dump...
Target is MySQL 8.0.35. Dump was produced from MySQL 8.0.32
Scanning metadata - done       
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done       
Executing view DDL - done       
Starting data load
2 thds loading | 100% (532.20 MB / 532.20 MB), 41.26 MB/s, 11 / 13 tables done
Executing common postamble SQL                                                
Recreating indexes - done       
14 chunks (2.75M rows, 532.20 MB) for 13 tables in 2 schemas were loaded in 10 sec (avg throughput 53.20 MB/s)
0 warnings were reported during the load.                                     
 MySQL  172.17.0.2:33060+ ssl  JS > 

From: https://www.cnblogs.com/nanxiang/p/17867296.html
