redis-rdb-cli

A tool that can parse, filter, split, and merge rdb files and analyze memory usage offline. It can also sync data between two redis instances and allows users to define their own sink service to migrate redis data elsewhere.

Chat with author

Join the chat at https://gitter.im/leonchen83/redis-rdb-cli

Contact the author

[email protected]

Binary release

binary releases

Runtime requirement

jdk 1.8+
 

Install

$ wget https://github.com/leonchen83/redis-rdb-cli/releases/download/${version}/redis-rdb-cli-release.zip
$ unzip redis-rdb-cli-release.zip
$ cd ./redis-rdb-cli/bin
$ ./rct -h
 

Compile requirement


jdk 1.8+
maven-3.3.1+

 

Compile & run

$ git clone https://github.com/leonchen83/redis-rdb-cli.git
$ cd redis-rdb-cli
$ mvn clean install -Dmaven.test.skip=true
$ cd target/redis-rdb-cli-release/redis-rdb-cli/bin
$ ./rct -h 
 

Run in docker

# run with jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest
$ rct -V

# run without jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest-native
$ rct -V
 

Build native image via graalvm in docker

$ docker build -m 8g -f DockerfileNative -t redisrdbcli:redis-rdb-cli .
$ docker run -it redisrdbcli:redis-rdb-cli bash
bash-5.1# rct -V
 

Windows Environment Variables

Add /path/to/redis-rdb-cli/bin to the Path environment variable.
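
For example, for the current Command Prompt session only (a sketch; C:\path\to\redis-rdb-cli is a placeholder for the actual install directory, and a permanent change is made under System Properties > Environment Variables instead):

> set Path=%Path%;C:\path\to\redis-rdb-cli\bin
> rct -h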

Usage

Redis mass insertion

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -r
$ cat /path/to/dump.aof | /redis/src/redis-cli -p 6379 --pipe
 

Convert rdb to dump format

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof
 

Convert rdb to json format

$ rct -f json -s /path/to/dump.rdb -o /path/to/dump.json
 

Number of keys in rdb

$ rct -f count -s /path/to/dump.rdb -o /path/to/dump.csv
 

Find top 50 largest keys

$ rct -f mem -s /path/to/dump.rdb -o /path/to/dump.mem -l 50
 

Diff rdb

$ rct -f diff -s /path/to/dump1.rdb -o /path/to/dump1.diff
$ rct -f diff -s /path/to/dump2.rdb -o /path/to/dump2.diff
$ diff /path/to/dump1.diff /path/to/dump2.diff
 

Convert rdb to RESP

$ rct -f resp -s /path/to/dump.rdb -o /path/to/appendonly.aof
 

Sync between 2 redis instances

$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
 

Sync single redis to redis cluster

$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:30001 -r -d 0
 

Handle infinite loop in rst command

# set client-output-buffer-limit in source redis
$ redis-cli config set client-output-buffer-limit "slave 0 0 0"
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
 

Migrate rdb to remote redis

$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r
 

Downgrade migration

# Migrate data from redis-7 to redis-6
# About dump_rdb_version please see comment in redis-rdb-cli.conf
$ sed -i 's/dump_rdb_version=-1/dump_rdb_version=9/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ rmt -s redis://com.redis7:6379 -m redis://com.redis6:6379 -r
 

Handle big key in migration

# set proto-max-bulk-len in target redis
$ redis-cli -h ${host} -p 6380 -a ${pwd} config set proto-max-bulk-len 2048mb

# set Xms Xmx in redis-rdb-cli node
$ export JAVA_TOOL_OPTIONS="-Xms8g -Xmx8g"

# execute migration
$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
 

Migrate rdb to remote redis cluster

$ rmt -s /path/to/dump.rdb -c ./nodes-30001.conf -r
 

or simply use the following command without nodes-30001.conf

$ rmt -s /path/to/dump.rdb -m redis://127.0.0.1:30001 -r
 

Backup remote rdb

$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb
 

Backup remote rdb and convert db to dest db

$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb --goal 3
 

Filter rdb

$ rdt -b /path/to/dump.rdb -o /path/to/filtered-dump.rdb -d 0 -t string
 

Split rdb via cluster's nodes.conf

$ rdt -s ./dump.rdb -c ./nodes.conf -o /path/to/folder -d 0
 

Merge multiple rdb files into one

$ rdt -m ./dump1.rdb ./dump2.rdb -o ./dump.rdb -t hash
 

Cut an aof-use-rdb-preamble file into an rdb file and an aof file

$ rcut -s ./aof-use-rdb-preamble.aof -r ./dump.rdb -a ./appendonly.aof
 

Other parameters

More configurable parameters can be modified in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf

Filter

  1. rct, rdt and rmt: these 3 commands support data filtering by type, db and key regular expression (Java style).
  2. rst: this command only supports data filtering by db.

For example:

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -d 0
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -t string hash
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r -d 0 1 -t list
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -d 0
 

Monitor redis server

# step1 
# open file `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`
# change property `metric_gateway` from `none` to `influxdb`
#
# step2
$ cd /path/to/redis-rdb-cli/dashboard
$ docker-compose up -d
#
# step3
$ rmonitor -s redis://127.0.0.1:6379 -n standalone
$ rmonitor -s redis://127.0.0.1:30001 -n cluster
$ rmonitor -s redis-sentinel://sntnl-usr:[email protected]:26379?master=mymaster&authUser=usr&authPassword=pwd -n sentinel
#
# step4
# open url `http://localhost:3000/d/monitor/monitor`, log in to grafana with `admin` / `admin` and check the monitor result.
 

(screenshot: monitor dashboard)

Difference between rmt and rst

  1. When rmt starts, the source redis first performs a BGSAVE and generates a snapshot rdb file. The rmt command migrates this snapshot file to the target redis; after this process is done, rmt terminates.
  2. rst migrates not only the snapshot rdb file but also the incremental data from the source redis, so rst never terminates unless you type CTRL+C. rst only supports filtering by db; for more details please refer to Limitation of migration.

Dashboard

Since v0.1.9, rct -f mem supports showing the result in a grafana dashboard like the following:
(screenshot: memory dashboard)

If you want to turn it on, you MUST install docker and docker-compose first; for the installation please refer to docker.
Then run the following command:

$ cd /path/to/redis-rdb-cli/dashboard

# start
$ docker-compose up -d

# stop
$ docker-compose down
 

Open /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
Then change the parameter metric_gateway from none to influxdb.

Open http://localhost:3000 to check the result of rct -f mem.

If you deploy this tool on multiple instances, you also need to change the parameter metric_instance to make sure it is unique between instances.
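
These changes can also be scripted (a sketch, assuming metric_gateway still has its default value none as stated above; metric_instance should be checked with grep and edited by hand so that every instance gets a unique name):

$ grep -E 'metric_gateway|metric_instance' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ sed -i 's/metric_gateway=none/metric_gateway=influxdb/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf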

Redis 6

Redis 6 SSL

  1. Use openssl to generate a keystore
$ cd /path/to/redis-6.0-rc1
$ ./utils/gen-test-certs.sh
$ cd tests/tls
$ openssl pkcs12 -export -CAfile ca.crt -in redis.crt -inkey redis.key -out redis.p12
 
  2. If the source redis and the target redis use the same keystore, then configure the following parameters:
    set source_keystore_path and target_keystore_path to point to /path/to/redis-6.0-rc1/tests/tls/redis.p12,
    and set source_keystore_pass and target_keystore_pass (see the conf sketch after this list).

  3. After configuring the ssl parameters, use rediss://host:port in your command to enable ssl, for example: rst -s rediss://127.0.0.1:6379 -m rediss://127.0.0.1:30001 -r -d 0
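
A minimal sketch of the corresponding entries in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf (the keystore path comes from the openssl step above; the password values are placeholders for whatever was used with openssl pkcs12 -export):

source_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
source_keystore_pass=yourpass
target_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
target_keystore_pass=yourpass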

Redis 6 ACL

  1. Use the following URI to enable redis ACL support
$ rst -s redis://user:[email protected]:6379 -m redis://user:[email protected]:6380 -r -d 0
 
  2. The user MUST have the +@all permission to handle commands (see the sketch below).
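
For reference, such a user can be created on the source and target redis with the standard Redis 6 ACL command (a minimal sketch; myuser and mypass are placeholders matching the URI above):

$ redis-cli ACL SETUSER myuser on '>mypass' '~*' +@all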

Hack rmt

Rmt threading model

The rmt command uses the following 4 parameters (in redis-rdb-cli.conf) to migrate data to a remote redis.

migrate_batch_size=4096
migrate_threads=4
migrate_flush=yes
migrate_retries=1
 

The most important parameter is migrate_threads=4. This means we use the following threading model to migrate data.


single redis ----> single redis

+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Target Redis |
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoint |-------------------|              |
+--------------+         +----------+                   +--------------+

 

single redis ----> redis cluster

+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Redis cluster|
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoints|-------------------|              |
+--------------+         +----------+                   +--------------+

 

The difference between cluster migration and single-instance migration is Endpoint vs Endpoints. In cluster migration, Endpoints contains multiple Endpoint instances that point to every master in the cluster. For example:

With a redis cluster of 3 masters and 3 replicas, if migrate_threads=4 then we have 3 * 4 = 12 connections to the master instances.

Migration performance

The following 3 parameters affect migration performance:

migrate_batch_size=4096
migrate_retries=1
migrate_flush=yes
 
  1. migrate_batch_size: By default we use redis pipelining to migrate data to the remote redis; migrate_batch_size is the pipeline batch size. If migrate_batch_size=1 then the pipeline degenerates into sending a single command and waiting for the response from the remote redis (see the tuning sketch after this list).
  2. migrate_retries: migrate_retries=1 means that if a socket error occurs, we recreate a new socket and retry sending the failed command to the target redis up to migrate_retries times.
  3. migrate_flush: migrate_flush=yes means we write every single command to the socket and then invoke SocketOutputStream.flush() immediately. If migrate_flush=no we invoke SocketOutputStream.flush() only after every 64KB written to the socket. Notice that this parameter also affects migrate_retries: migrate_retries only takes effect when migrate_flush=yes.
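
For example, the defaults shown above can be adjusted before running rmt, mirroring the sed usage from the downgrade-migration example (a sketch; 8192 and 8 are illustrative values, not tuned recommendations):

$ sed -i 's/migrate_batch_size=4096/migrate_batch_size=8192/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ sed -i 's/migrate_threads=4/migrate_threads=8/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r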

Migration principle


+---------------+             +-------------------+    restore      +---------------+ 
|               |             | redis dump format |---------------->|               |
|               |             |-------------------|    restore      |               |
|               |   convert   | redis dump format |---------------->|               |
|    Dump rdb   |------------>|-------------------|    restore      | Target Redis  |
|               |             | redis dump format |---------------->|               |
|               |             |-------------------|    restore      |               |
|               |             | redis dump format |---------------->|               |
+---------------+             +-------------------+                 +---------------+
 

Limitation of migration

  1. We use the cluster's nodes.conf to migrate data to a cluster, and we do not handle MOVED and ASK redirections. So the limitation of cluster migration is that the cluster MUST be in a stable state during the migration. This means the cluster MUST have no migrating or importing slots and no slave-to-master switchover.
  2. If you use rst to migrate data to a cluster, the following commands are not supported: PUBLISH,SWAPDB,MOVE,FLUSHALL,FLUSHDB,MULTI,EXEC,SCRIPT FLUSH,SCRIPT LOAD,EVAL,EVALSHA. The following commands RPOPLPUSH,SDIFFSTORE,SINTERSTORE,SMOVE,ZINTERSTORE,ZUNIONSTORE,DEL,UNLINK,RENAME,RENAMENX,PFMERGE,PFCOUNT,MSETNX,BRPOPLPUSH,BITOP,MSET,COPY,BLMOVE,LMOVE,ZDIFFSTORE,GEOSEARCHSTORE are ONLY supported when the command's keys are in the same slot (e.g. del {user}:1 {user}:2; see the hash tag sketch after this list).
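
A quick illustration of the same-slot rule using hash tags (a sketch; it assumes a cluster node listening on 30001 as in the cluster examples above). Keys sharing the hash tag {user} map to the same slot, so multi-key commands on them are accepted:

$ redis-cli -c -p 30001 set {user}:1 a
$ redis-cli -c -p 30001 set {user}:2 b
$ redis-cli -c -p 30001 del {user}:1 {user}:2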

Hack ret

What the ret command does

  1. The ret command allows users to define their own sink service, for example to sink redis data to mysql or mongodb.
  2. The ret command uses the Java SPI extension mechanism to do this job.

How to implement a sink service

Users should follow the steps below to implement a sink service.

  1. Create a java project using the following maven pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    
    <groupId>com.your.company</groupId>
    <artifactId>your-sink-service</artifactId>
    <version>1.0.0</version>
    
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-rdb-cli-api</artifactId>
            <version>1.8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-replicator</artifactId>
            <version>[3.6.4, )</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
        
        <!-- 
        <dependency>
            other dependencies
        </dependency>
        -->
        
    </dependencies>
    
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
 
  2. Implement the SinkService interface
// the imports below assume the standard package layout of redis-rdb-cli-api and redis-replicator
import java.io.File;
import java.io.IOException;

import com.moilioncircle.redis.rdb.cli.api.sink.SinkService;
import com.moilioncircle.redis.replicator.Replicator;
import com.moilioncircle.redis.replicator.event.Event;

public class YourSinkService implements SinkService {

    @Override
    public String sink() {
        return "your-sink-service";
    }

    @Override
    public void init(File config) throws IOException {
        // parse your external sink config
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        // your sink business
    }
}
 
  3. Register this service using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.sink.SinkService file in src/main/resources/META-INF/services/

|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.sink.SinkService

# add the following content to com.moilioncircle.redis.rdb.cli.api.sink.SinkService

your.package.YourSinkService

 
  4. Package and deploy
$ mvn clean install

$ cp ./target/your-sink-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
 
  5. Run your sink service
$ ret -s redis://127.0.0.1:6379 -c config.conf -n your-sink-service
 
  6. Debug your sink service
    public static void main(String[] args) throws Exception {
        Replicator replicator = new RedisReplicator("redis://127.0.0.1:6379");
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            Replicators.closeQuietly(replicator);
        }));
        replicator.addExceptionListener((rep, tx, e) -> {
            throw new RuntimeException(tx.getMessage(), tx);
        });
        SinkService sink = new YourSinkService();
        sink.init(new File("/path/to/your-sink.conf"));
        replicator.addEventListener(new AsyncEventListener(sink, replicator, 4, Executors.defaultThreadFactory()));
        replicator.open();
    }
 

How to implement a formatter service

  1. Create YourFormatterService extending AbstractFormatterService
public class YourFormatterService extends AbstractFormatterService {

    @Override
    public String format() {
        return "test";
    }

    @Override
    public Event applyString(Replicator replicator, RedisInputStream in, int version, byte[] key, int type, ContextKeyValuePair context) throws IOException {
        byte[] val = new DefaultRdbValueVisitor(replicator).applyString(in, version);
        getEscaper().encode(key, getOutputStream());
        getEscaper().encode(val, getOutputStream());
        getOutputStream().write('\n');
        return context;
    }
}
 
  2. Register this formatter using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.format.FormatterService file in src/main/resources/META-INF/services/

|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.format.FormatterService

# add the following content to com.moilioncircle.redis.rdb.cli.api.format.FormatterService

your.package.YourFormatterService

 
  3. Package and deploy
$ mvn clean install

$ cp ./target/your-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
 
  4. Run your formatter service
$ rct -f test -s redis://127.0.0.1:6379 -o ./out.csv -t string -d 0 -e json
 

Contributors

Consulting

Commercial support for redis-rdb-cli is available. The following services are currently available:

  • Onsite consulting. $10,000 per day
  • Onsite training. $10,000 per day

You may also contact Baoyi Chen directly, mail to [email protected].

Supported by 宁文君

27 January 2023 was a sad day: I lost my mother 宁文君. She was always encouraging and supporting me in developing this tool. Every time a company used this tool, she got as excited as a child and encouraged me to keep going. Without her I couldn't have maintained this tool for so many years. Even though I didn't achieve much, she was still proud of me. R.I.P. and may God bless her.

Supported by IntelliJ IDEA

IntelliJ IDEA is a Java integrated development environment (IDE) for developing computer software.
It is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition,
and in a proprietary commercial edition. Both can be used for commercial development.

    JavaWeb专栏之(三):Eclipse创建JavaWeb项目前言:关注:《遇见小Du说》微信公众号,分享更多Java知识,不负每一次相遇。更多内容请访问:www.dushunchang.top在上一篇文章中,小Du猿带大家使用Idea创建JavaWeb项目,相比之下Idea作为当前非常主流的开发IDE,深受Java后端程序员使用。市面上约......