
MySQL Logical Backup Tool MyDumper


1. Install Dependencies

dnf install -y cmake gcc gcc-c++ git make

2. Download and Install mydumper

# Download
wget https://github.com/mydumper/mydumper/releases/download/v0.14.1-1/mydumper-0.14.1-1.el9.x86_64.rpm

# Install
dnf install mydumper-0.14.1-1.el9.x86_64.rpm -y
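After installation, a quick sanity check confirms that both binaries are available and shows the installed version (the exact version string depends on the package you installed):

# Verify the installation
mydumper --version
myloader --version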

3. mydumper Options

# mydumper --help
Usage:
  mydumper [OPTION…] multi-threaded MySQL dumping

Connection Options
  -h, --host                       The host to connect to
  -u, --user                       Username with the necessary privileges
  -p, --password                   User password
  -a, --ask-password               Prompt For User password
  -P, --port                       TCP/IP port to connect to
  -S, --socket                     UNIX domain socket file to use for connection
  -C, --compress-protocol          Use compression on the MySQL connection
  --ssl                            Connect using SSL
  --ssl-mode                       Desired security state of the connection to the server: DISABLED, PREFERRED, REQUIRED, VERIFY_CA, VERIFY_IDENTITY
  --key                            The path name to the key file
  --cert                           The path name to the certificate file
  --ca                             The path name to the certificate authority file
  --capath                         The path name to a directory that contains trusted SSL CA certificates in PEM format
  --cipher                         A list of permissible ciphers to use for SSL encryption
  --tls-version                    Which protocols the server permits for encrypted connections

Filter Options
  -x, --regex                      Regular expression for 'db.table' matching
  -B, --database                   Database to dump
  -i, --ignore-engines             Comma delimited list of storage engines to ignore
  --where                          Dump only selected records.
  -U, --updated-since              Use Update_time to dump only tables updated in the last U days
  -O, --omit-from-file             File containing a list of database.table entries to skip, one per line (skips before applying regex option)
  -T, --tables-list                Comma delimited table list to dump (does not exclude regex option). Table name must include database name. For instance: test.t1,test.t2

Lock Options
  -z, --tidb-snapshot              Snapshot to use for TiDB
  -k, --no-locks                   Do not execute the temporary shared read lock.  WARNING: This will cause inconsistent backups
  --use-savepoints                 Use savepoints to reduce metadata locking issues, needs SUPER privilege
  --no-backup-locks                Do not use Percona backup locks
  --lock-all-tables                Use LOCK TABLE for all, instead of FTWRL
  --less-locking                   Minimize locking time on InnoDB tables.
  --trx-consistency-only           Transactional consistency only

PMM Options
  --pmm-path                       which default value will be /usr/local/percona/pmm2/collectors/textfile-collector/high-resolution
  --pmm-resolution                 which default will be high

Exec Options
  --exec-threads                   Amount of threads to use with --exec
  --exec                           Command to execute using the file as parameter
  --exec-per-thread                Set the command that will receive by STDIN and write in the STDOUT into the output file
  --exec-per-thread-extension      Set the extension for the STDOUT file when --exec-per-thread is used

If long query running found:
  --long-query-retries             Retry checking for long queries, default 0 (do not retry)
  --long-query-retry-interval      Time to wait before retrying the long query check in seconds, default 60
  -l, --long-query-guard           Set long query timer in seconds, default 60
  -K, --kill-long-queries          Kill long running queries (instead of aborting)

Job Options
  --max-rows                       Limit the number of rows per block after the table is estimated, default 1000000
  --char-deep                      
  --char-chunk                     
  -r, --rows                       Try to split tables into chunks of this many rows.
  --split-partitions               Dump partitions into separate files. This option overrides the --rows option for partitioned tables.

Checksum Options
  -M, --checksum-all               Dump checksums for all elements
  --data-checksums                 Dump table checksums with the data
  --schema-checksums               Dump schema table and view creation checksums
  --routine-checksums              Dump triggers, functions and routines checksums

Objects Options
  -m, --no-schemas                 Do not dump table schemas with the data and triggers
  -Y, --all-tablespaces            Dump all the tablespaces.
  -d, --no-data                    Do not dump table data
  -G, --triggers                   Dump triggers. By default, it does not dump triggers
  -E, --events                     Dump events. By default, it does not dump events
  -R, --routines                   Dump stored procedures and functions. By default, it does not dump stored procedures or functions
  --views-as-tables                Export VIEWs as they were tables
  -W, --no-views                   Do not dump VIEWs

Statement Options
  --load-data                      
  --csv                            Automatically enables --load-data and set variables to export in CSV format.
  --fields-terminated-by           
  --fields-enclosed-by             
  --fields-escaped-by              Single character that is going to be used to escape characters in the LOAD DATA statement, default: '\' 
  --lines-starting-by              Adds the string at the beginning of each row. When --load-data is used it is added to the LOAD DATA statement. It also affects INSERT INTO statements when it is used.
  --lines-terminated-by            Adds the string at the end of each row. When --load-data is used it is added to the LOAD DATA statement. It also affects INSERT INTO statements when it is used.
  --statement-terminated-by        This might never be used, unless you know what are you doing
  -N, --insert-ignore              Dump rows with INSERT IGNORE
  --replace                        Dump rows with REPLACE
  --complete-insert                Use complete INSERT statements that include column names
  --hex-blob                       Dump binary columns using hexadecimal notation
  --skip-definer                   Removes DEFINER from the CREATE statement. By default, statements are not modified
  -s, --statement-size             Attempted size of INSERT statement in bytes, default 1000000
  --tz-utc                         SET TIME_ZONE='+00:00' at top of dump to allow dumping of TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones, defaults to on; use --skip-tz-utc to disable.
  --skip-tz-utc                    
  --set-names                      Sets the names, use it at your own risk, default binary

Extra Options
  -F, --chunk-filesize             Split tables into chunks of this output file size. This value is in MB
  --exit-if-broken-table-found     Exits if a broken table has been found
  --success-on-1146                Do not increment the error count and log a Warning instead of Critical when a table doesn't exist
  -e, --build-empty-files          Build dump files even if no data available from table
  --no-check-generated-fields      Queries related to generated fields are not going to be executed. It will lead to restoration issues if you have generated columns
  --order-by-primary               Sort the data by Primary Key or Unique key if no primary key exists
  -c, --compress                   Compress output files

Daemon Options
  -D, --daemon                     Enable daemon mode
  -I, --snapshot-interval          Interval between each dump snapshot (in minutes), requires --daemon, default 60
  -X, --snapshot-count             number of snapshots, default 2

Application Options:
  -?, --help                       Show help options
  -o, --outputdir                  Directory to output files to
  --stream                         It will stream over STDOUT once the files have been written. Since v0.12.7-1, accepts NO_DELETE, NO_STREAM_AND_NO_DELETE and TRADITIONAL which is the default value and used if no parameter is given
  -L, --logfile                    Log file name to use, by default stdout is used
  --disk-limits                    Set the limit to pause and resume if it determines there is not enough disk space. Accepts values like: '<resume>:<pause>' in MB. For instance: 100:500 will pause when there is only 100MB free and will resume if 500MB are available
  -t, --threads                    Number of threads to use, default 4
  -V, --version                    Show the program version and exit
  --identifier-quote-character     This sets the identifier quote character that is used in INSERT statements only on mydumper and to split statements on myloader. Use SQL_MODE to change the CREATE TABLE statements. Possible values are: BACKTICK and DOUBLE_QUOTE. Default: BACKTICK
  -v, --verbose                    Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
  --defaults-file                  Use a specific defaults file. Default: /etc/mydumper.cnf
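The last option shows that mydumper (and myloader) read a defaults file, /etc/mydumper.cnf by default. The sketch below is only an illustrative defaults file; it assumes MySQL-style option groups named [mydumper] and [myloader] containing long option names, so check the documentation of your installed version before relying on it:

# /etc/mydumper.cnf -- illustrative defaults file (group names and option spelling are assumptions)
[mydumper]
host = 127.0.0.1
port = 3306
user = root
threads = 4
outputdir = /data/backup/default

[myloader]
host = 127.0.0.1
port = 3306
user = root
threads = 4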

4. myloader Options

# myloader --help
Usage:
  myloader [OPTION…] multi-threaded MySQL loader

Connection Options
  -h, --host                           The host to connect to
  -u, --user                           Username with the necessary privileges
  -p, --password                       User password
  -a, --ask-password                   Prompt For User password
  -P, --port                           TCP/IP port to connect to
  -S, --socket                         UNIX domain socket file to use for connection
  -C, --compress-protocol              Use compression on the MySQL connection
  --ssl                                Connect using SSL
  --ssl-mode                           Desired security state of the connection to the server: DISABLED, PREFERRED, REQUIRED, VERIFY_CA, VERIFY_IDENTITY
  --key                                The path name to the key file
  --cert                               The path name to the certificate file
  --ca                                 The path name to the certificate authority file
  --capath                             The path name to a directory that contains trusted SSL CA certificates in PEM format
  --cipher                             A list of permissible ciphers to use for SSL encryption
  --tls-version                        Which protocols the server permits for encrypted connections

Filter Options
  -x, --regex                          Regular expression for 'db.table' matching
  -s, --source-db                      Database to restore
  --skip-triggers                      Do not import triggers. By default, it imports triggers
  --skip-post                          Do not import events, stored procedures and functions. By default, it imports events, stored procedures and functions
  --no-data                            Do not dump or import table data
  -O, --omit-from-file                 File containing a list of database.table entries to skip, one per line (skips before applying regex option)
  -T, --tables-list                    Comma delimited table list to dump (does not exclude regex option). Table name must include database name. For instance: test.t1,test.t2

PMM Options
  --pmm-path                           which default value will be /usr/local/percona/pmm2/collectors/textfile-collector/high-resolution
  --pmm-resolution                     which default will be high

Execution Options
  -e, --enable-binlog                  Enable binary logging of the restore data
  --innodb-optimize-keys               Creates the table without the indexes and it adds them at the end. Options: AFTER_IMPORT_PER_TABLE and AFTER_IMPORT_ALL_TABLES. Default: AFTER_IMPORT_PER_TABLE
  --purge-mode                         This specify the truncate mode which can be: NONE, DROP, TRUNCATE and DELETE
  --disable-redo-log                   Disables the REDO_LOG and enables it after, doesn't check initial status
  -o, --overwrite-tables               Drop tables if they already exist
  --serialized-table-creation          Table recreation will be executed in series, one thread at a time
  --stream                             It will receive the stream from STDIN and create the files on disk before starting to process them. Since v0.12.7-1, accepts NO_DELETE, NO_STREAM_AND_NO_DELETE and TRADITIONAL which is the default value and used if no parameter is given

Threads Options
  --max-threads-per-table              Maximum number of threads per table to use, default 4
  --max-threads-for-index-creation     Maximum number of threads for index creation, default 4

Statement Options
  -r, --rows                           Split the INSERT statement into this many rows.
  -q, --queries-per-transaction        Number of queries per transaction, default 1000
  --append-if-not-exist                Appends IF NOT EXISTS to the create table statements. This will be removed when https://bugs.mysql.com/bug.php?id=103791 has been implemented
  --set-names                          Sets the names, use it at your own risk, default binary
  --skip-definer                       Removes DEFINER from the CREATE statement. By default, statements are not modified

Application Options:
  -?, --help                           Show help options
  -d, --directory                      Directory of the dump to import
  -L, --logfile                        Log file name to use, by default stdout is used
  -B, --database                       An alternative database to restore into
  --resume                             Expect to find resume file in backup dir and will only process those files
  -t, --threads                        Number of threads to use, default 4
  -V, --version                        Show the program version and exit
  --identifier-quote-character         This sets the identifier quote character that is used in INSERT statements only on mydumper and to split statements on myloader. Use SQL_MODE to change the CREATE TABLE statements. Possible values are: BACKTICK and DOUBLE_QUOTE. Default: BACKTICK
  -v, --verbose                        Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
  --defaults-file                      Use a specific defaults file. Default: /etc/mydumper.cnf
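For large restores, the execution and thread options above can be combined. The invocation below is only a sketch built from the flags documented in this help output; the credentials and directory are the same placeholders used in the examples later in this post, and the thread counts should be tuned to your hardware:

# Defer index creation and raise thread counts for a faster bulk restore
myloader -u root -p rootroot -o \
  --innodb-optimize-keys=AFTER_IMPORT_ALL_TABLES \
  --max-threads-per-table 8 \
  -q 5000 \
  -t 8 \
  -d /data/backup/all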

5. Using mydumper

# Back up all databases
mydumper -u root -p rootroot -o /data/backup/all_db/

# Back up all databases, including triggers, events, stored procedures and functions
mydumper -u root -p rootroot -G -R -E -o /data/backup/all/

# Back up a specific database
mydumper -u root -p rootroot -G -R -E -B demo -o /data/backup/demo/

# Use a regex to exclude the system schemas
mydumper -u root -p rootroot -G -R -E --regex '^(?!(mysql|sys))' -o /data/backup/db

# Back up specific tables
mydumper -u root -p rootroot -B demo -T table1,table2 -o /data/backup/table/

# Back up table schemas only
mydumper -u root -p rootroot -d -B demo -o /data/backup/nodata/

# Back up table data only
mydumper -u root -p rootroot -m -B demo -o /data/backup/noschema/

# Back up a single table with compression
mydumper -u root -p rootroot -B demo -T table1 -c -o /data/backup/compress/
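The one-off commands above can be combined into a small wrapper for scheduled full backups. The following is a minimal sketch assuming the /data/backup path and root credentials used throughout this post; adjust both before use:

#!/bin/bash
# Full backup of all schemas except mysql and sys, with triggers, routines and events,
# compressed and written to a date-stamped directory
BACKUP_DIR=/data/backup/full_$(date +%F)
mkdir -p "$BACKUP_DIR"
mydumper -u root -p rootroot -G -R -E -c \
  --regex '^(?!(mysql|sys))' \
  -t 4 \
  -o "$BACKUP_DIR"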

6. Using myloader

# Restore the full backup (existing tables are dropped first)
myloader -u root -p rootroot -o -d /data/backup/all

# Restore a specific database from the full backup
myloader -u root -p rootroot -s demo -o -d /data/backup/demo

# Restore a database backup into a different database (the target database is created if it does not exist)
myloader -u root -p rootroot -B recover_demo -s demo -o -d /data/backup/demo
myloader -u root -p rootroot -B recover_demo -o -d /data/backup/demo

# Enable binlog during the restore (required when the data must replicate to standby servers)
myloader -u root -p rootroot -e -o -d /data/backup/demo

# myloader cannot restore a single table directly; instead, load the per-table dump files from the mysql client
source demo.table1-schema.sql	# restore the table schema
source demo.table1.sql	# restore the table data
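Alternatively, the same per-table files can be fed to the mysql client non-interactively; this is simply the command-line equivalent of the source statements above, assuming an uncompressed dump and the demo.table1 example:

mysql -u root -p demo < demo.table1-schema.sql	# restore the table schema
mysql -u root -p demo < demo.table1.sql	# restore the table data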

Reference: https://github.com/mydumper/mydumper

From: https://www.cnblogs.com/a120608yby/p/17187918.html
