
redis-rdb-cli

A tool that can parse, filter, split, and merge rdb files and analyze memory usage offline. It can also sync data between two redis instances and lets users define their own sink service to migrate redis data elsewhere.


  

Chat with author

Contact the author

Binary release

Download binary releases from the GitHub releases page.

Runtime requirement

jdk 1.8+

 

Install

$ wget https://github.com/leonchen83/redis-rdb-cli/releases/download/${version}/redis-rdb-cli-release.zip
$ unzip redis-rdb-cli-release.zip
$ cd ./redis-rdb-cli/bin
$ ./rct -h

 

Compile requirement

jdk 1.8+
maven-3.3.1+

 

Compile & run

$ git clone https://github.com/leonchen83/redis-rdb-cli.git
$ cd redis-rdb-cli
$ mvn clean install -Dmaven.test.skip=true
$ cd target/redis-rdb-cli-release/redis-rdb-cli/bin
$ ./rct -h

 

Run in docker

# run with jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest
$ rct -V

# run without jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest-native
$ rct -V

 

Build native image via graalvm in docker

$ docker build -m 8g -f DockerfileNative -t redisrdbcli:redis-rdb-cli .
$ docker run -it redisrdbcli:redis-rdb-cli bash
bash-5.1# rct -V

 

Windows Environment Variables

Add /path/to/redis-rdb-cli/bin to the Path environment variable.
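
A minimal sketch for a Windows Command Prompt; the install location C:\redis-rdb-cli is an assumption, adjust it to wherever you unpacked the release:

:: appends the bin folder to the user-level Path (note that %Path% expands to the
:: combined system and user value); open a new terminal afterwards for it to take effect
setx Path "%Path%;C:\redis-rdb-cli\bin"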

Usage

Redis mass insertion

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -r
$ cat /path/to/dump.aof | /redis/src/redis-cli -p 6379 --pipe

 

Convert rdb to dump format

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof

 

Convert rdb to json format

$ rct -f json -s /path/to/dump.rdb -o /path/to/dump.json

 

Number of keys in rdb

$ rct -f count -s /path/to/dump.rdb -o /path/to/dump.csv

 

Find top 50 largest keys

$ rct -f mem -s /path/to/dump.rdb -o /path/to/dump.mem -l 50

 

Diff rdb

$ rct -f diff -s /path/to/dump1.rdb -o /path/to/dump1.diff
$ rct -f diff -s /path/to/dump2.rdb -o /path/to/dump2.diff
$ diff /path/to/dump1.diff /path/to/dump2.diff

 

Convert rdb to RESP

$ rct -f resp -s /path/to/dump.rdb -o /path/to/appendonly.aof

 

Sync between 2 redis instances

$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r

 

Sync single redis to redis cluster

$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:30001 -r -d 0

 

Handle infinite loop in rst command

# set client-output-buffer-limit in source redis
$ redis-cli config set client-output-buffer-limit "slave 0 0 0"
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r

 

Migrate rdb to remote redis

$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r

 

Downgrade migration

# Migrate data from redis-7 to redis-6
# About dump_rdb_version please see comment in redis-rdb-cli.conf
$ sed -i 's/dump_rdb_version=-1/dump_rdb_version=9/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ rmt -s redis://com.redis7:6379 -m redis://com.redis6:6379 -r

 

Handle big key in migration

# set proto-max-bulk-len in target redis
$ redis-cli -h ${host} -p 6380 -a ${pwd} config set proto-max-bulk-len 2048mb

# set Xms Xmx in redis-rdb-cli node
$ export JAVA_TOOL_OPTIONS="-Xms8g -Xmx8g"

# execute migration
$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r

 

Migrate rdb to remote redis cluster

$ rmt -s /path/to/dump.rdb -c ./nodes-30001.conf -r

 

or simply use the following command without nodes-30001.conf

$ rmt -s /path/to/dump.rdb -m redis://127.0.0.1:30001 -r

 

Backup remote rdb

$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb

 

Backup remote rdb and convert db to dest db

$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb --goal 3

 

Filter rdb

$ rdt -b /path/to/dump.rdb -o /path/to/filtered-dump.rdb -d 0 -t string

 

Split rdb via cluster's nodes.conf

$ rdt -s ./dump.rdb -c ./nodes.conf -o /path/to/folder -d 0

 

Merge multiple rdb files into one

$ rdt -m ./dump1.rdb ./dump2.rdb -o ./dump.rdb -t hash

 

Split an aof-use-rdb-preamble file into an rdb file and an aof file

$ rcut -s ./aof-use-rdb-preamble.aof -r ./dump.rdb -a ./appendonly.aof

 

Other parameters

More configurable parameters can be modified in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf

Filter

  1. rct, rdt and rmt: these 3 commands support filtering data by type, db and key regex (Java style).
  2. rst: this command only supports filtering by db.

For example:

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -d 0
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -t string hash
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r -d 0 1 -t list
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -d 0

 

Monitor redis server

# step1
# open file `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`
# change the property `metric_gateway` from `none` to `influxdb`
#
# step2
$ cd /path/to/redis-rdb-cli/dashboard
$ docker-compose up -d
#
# step3
$ rmonitor -s redis://127.0.0.1:6379 -n standalone
$ rmonitor -s redis://127.0.0.1:30001 -n cluster
$ rmonitor -s "redis-sentinel://sntnl-usr:sntnl-pwd@127.0.0.1:26379?master=mymaster&authUser=usr&authPassword=pwd" -n sentinel
#
# step4
# open url `http://localhost:3000/d/monitor/monitor`, log in to grafana with `admin` / `admin` and check the monitor result.

 

Difference between rmt and rst

  1. When rmt starts, the source redis first does a BGSAVE and generates a snapshot rdb file. The rmt command migrates this snapshot file to the target redis; after this process is done, rmt terminates.
  2. rst migrates not only the snapshot rdb file but also the incremental data from the source redis, so rst never terminates unless you type CTRL+C. rst only supports the db filter; for more details please refer to Limitation of migration.

Dashboard

Since v0.1.9, rct -f mem supports showing the result in a grafana dashboard.

If you want to turn it on, you MUST install docker and docker-compose first (for the installation please refer to the docker documentation). Then run the following commands:

$ cd /path/to/redis-rdb-cli/dashboard

# start
$ docker-compose up -d

# stop
$ docker-compose down

 

Open /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf and change the parameter metric_gateway from none to influxdb.

Open http://localhost:3000 to check the result of rct -f mem.

If you deploy this tool on multiple instances, you need to change the parameter metric_instance to make sure it is unique between instances.
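
A minimal sketch of the relevant entries in redis-rdb-cli.conf; the instance name below is illustrative, pick any value that is unique per deployment:

# enable the influxdb metric gateway and give this node a unique instance name
metric_gateway=influxdb
metric_instance=instance-1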

Redis 6

Redis 6 SSL

  1. use openssl to generate keystore
$ cd /path/to/redis-6.0-rc1
$ ./utils/gen-test-certs.sh
$ cd tests/tls
$ openssl pkcs12 -export -CAfile ca.crt -in redis.crt -inkey redis.key -out redis.p12

 

  1. If the source redis and target redis use the same keystore, configure the following parameters:
    set source_keystore_path and target_keystore_path to point to /path/to/redis-6.0-rc1/tests/tls/redis.p12, and set source_keystore_pass and target_keystore_pass. A configuration sketch is shown after this list.
  2. After configuring the ssl parameters, use rediss://host:port in your command to enable ssl, for example: rst -s rediss://127.0.0.1:6379 -m rediss://127.0.0.1:30001 -r -d 0
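
A minimal sketch of the keystore entries in redis-rdb-cli.conf, assuming both sides share the keystore generated above; the password value is illustrative:

source_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
source_keystore_pass=changeit
target_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
target_keystore_pass=changeit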

Redis 6 ACL

  1. use the following URI form to enable redis ACL support
$ rst -s redis://user:pass@127.0.0.1:6379 -m redis://user:pass@127.0.0.1:6380 -r -d 0

 

  2. the user MUST have +@all permission to handle commands

Hack rmt

Rmt threading model

The rmt command uses the following 4 parameters (in redis-rdb-cli.conf) to migrate data to a remote redis.

migrate_batch_size=4096
migrate_threads=4
migrate_flush=yes
migrate_retries=1

 

The most important parameter is migrate_threads=4. This means we use the following threading model to migrate data.

single redis ----> single redis

+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Target Redis |
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoint |-------------------|              |
+--------------+         +----------+                   +--------------+

 

single redis ----> redis cluster

+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Redis cluster|
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoints|-------------------|              |
+--------------+         +----------+                   +--------------+

 

The difference between cluster migration and single migration is Endpoint versus Endpoints. In cluster migration, an Endpoints instance contains multiple Endpoint instances, one pointing to each master in the cluster. For example:

For a redis cluster with 3 masters and 3 replicas, if migrate_threads=4 then we have 3 * 4 = 12 connections to the master instances.

Migration performance

The following 3 parameters affect migration performance

migrate_batch_size=4096
migrate_retries=1
migrate_flush=yes

 

  1. migrate_batch_size: by default we use the redis pipeline to migrate data to the remote. migrate_batch_size is the pipeline batch size; if migrate_batch_size=1 the pipeline degenerates into sending a single command and waiting for its response from the remote.
  2. migrate_retries: migrate_retries=1 means that if a socket error occurs, we create a new socket and retry sending the failed command to the target redis up to migrate_retries times.
  3. migrate_flush: migrate_flush=yes means we write every single command to the socket and invoke SocketOutputStream.flush() immediately. If migrate_flush=no, we invoke SocketOutputStream.flush() only after every 64KB written to the socket. Note that this parameter also affects migrate_retries: migrate_retries only takes effect when migrate_flush=yes. A tuning sketch follows this list.
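
A hedged tuning sketch, assuming throughput matters more than per-command latency; the values below are illustrative and not recommendations from the project:

# illustrative values only: larger batches plus buffered flushing usually trade
# per-command latency (and retry granularity, since migrate_retries needs migrate_flush=yes) for throughput
migrate_batch_size=8192
migrate_threads=8
migrate_flush=no
migrate_retries=1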

Migration principle

+---------------+             +-------------------+    restore      +---------------+ 
|               |             | redis dump format |---------------->|               |
|               |             |-------------------|    restore      |               |
|               |   convert   | redis dump format |---------------->|               |
|    Dump rdb   |------------>|-------------------|    restore      | Target Redis  |
|               |             | redis dump format |---------------->|               |
|               |             |-------------------|    restore      |               |
|               |             | redis dump format |---------------->|               |
+---------------+             +-------------------+                 +---------------+

 

Limitation of migration

  1. We use the cluster's nodes.conf to migrate data to a cluster. Because we do not handle MOVED and ASK redirection, the limitation of cluster migration is that the cluster MUST be in a stable state during the migration. This means the cluster MUST have no migrating or importing slots and no slave-to-master switchover.
  2. If you use rst to migrate data to a cluster, the following commands are not supported: PUBLISH,SWAPDB,MOVE,FLUSHALL,FLUSHDB,MULTI,EXEC,SCRIPT FLUSH,SCRIPT LOAD,EVAL,EVALSHA. The following commands RPOPLPUSH,SDIFFSTORE,SINTERSTORE,SMOVE,ZINTERSTORE,ZUNIONSTORE,DEL,UNLINK,RENAME,RENAMENX,PFMERGE,PFCOUNT,MSETNX,BRPOPLPUSH,BITOP,MSET,COPY,BLMOVE,LMOVE,ZDIFFSTORE,GEOSEARCHSTORE are ONLY SUPPORTED when all of their keys are in the same slot (e.g. del {user}:1 {user}:2); see the hash-tag sketch below.
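
A minimal sketch showing how hash tags keep multi-key commands in one slot; the key names and port are illustrative:

# both keys hash on the tag "user", so they map to the same slot and one DEL can cover both
$ redis-cli -c -p 30001 set {user}:1 a
$ redis-cli -c -p 30001 set {user}:2 b
$ redis-cli -c -p 30001 del {user}:1 {user}:2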

Hack ret

What the ret command does

  1. The ret command allows users to define their own sink service, for example to sink redis data to mysql or mongodb.
  2. The ret command uses the Java SPI extension mechanism to do this job.

How to implement a sink service

Users should follow the steps below to implement a sink service.

  1. Create a java project using the following maven pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    
    <groupId>com.your.company</groupId>
    <artifactId>your-sink-service</artifactId>
    <version>1.0.0</version>
    
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-rdb-cli-api</artifactId>
            <version>1.8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-replicator</artifactId>
            <version>[3.6.4, )</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
        
        <!-- 
        <dependency>
            other dependencies
        </dependency>
        -->
        
    </dependencies>
    
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

 

  2. Implement the SinkService interface
public class YourSinkService implements SinkService {

    @Override
    public String sink() {
        return "your-sink-service";
    }

    @Override
    public void init(File config) throws IOException {
        // parse your external sink config
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        // your sink business
    }
}

 

  3. Register this service using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.sink.SinkService file in src/main/resources/META-INF/services/

|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.sink.SinkService

# add following content in com.moilioncircle.redis.rdb.cli.api.sink.SinkService

your.package.YourSinkService

 

  4. Package and deploy
$ mvn clean install

$ cp ./target/your-sink-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib

 

  5. Run your sink service
$ ret -s redis://127.0.0.1:6379 -c config.conf -n your-sink-service

 

  6. Debug your sink service
public static void main(String[] args) throws Exception {
    // connect to the source redis that feeds your sink
    Replicator replicator = new RedisReplicator("redis://127.0.0.1:6379");
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        Replicators.closeQuietly(replicator);
    }));
    replicator.addExceptionListener((rep, tx, e) -> {
        throw new RuntimeException(tx.getMessage(), tx);
    });
    // initialize your sink with its config file, then attach it as an async event listener
    SinkService sink = new YourSinkService();
    sink.init(new File("/path/to/your-sink.conf"));
    replicator.addEventListener(new AsyncEventListener(sink, replicator, 4, Executors.defaultThreadFactory()));
    replicator.open();
}

 

How to implement a formatter service

  1. Create YourFormatterService extending AbstractFormatterService
public class YourFormatterService extends AbstractFormatterService {

    @Override
    public String format() {
        return "test";
    }

    @Override
    public Event applyString(Replicator replicator, RedisInputStream in, int version, byte[] key, int type, ContextKeyValuePair context) throws IOException {
        byte[] val = new DefaultRdbValueVisitor(replicator).applyString(in, version);
        getEscaper().encode(key, getOutputStream());
        getEscaper().encode(val, getOutputStream());
        getOutputStream().write('\n');
        return context;
    }
}

 

  2. Register this formatter using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.format.FormatterService file in src/main/resources/META-INF/services/

|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.format.FormatterService

# add following content in com.moilioncircle.redis.rdb.cli.api.format.FormatterService

your.package.YourFormatterService

 

  3. Package and deploy
$ mvn clean install

$ cp ./target/your-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib

 

  4. Run your formatter service
$ rct -f test -s redis://127.0.0.1:6379 -o ./out.csv -t string -d 0 -e json

 

Contributors

Consulting

Commercial support for redis-rdb-cli is available. The following services are currently available:

  • Onsite consulting. $10,000 per day
  • Onsite training. $10,000 per day

You may also contact Baoyi Chen directly by email.

Supported by 宁文君

27 January 2023 was a sad day: I lost my mother, 宁文君. She was always encouraging and supporting me in developing this tool. Every time a company used this tool, she got excited like a child and encouraged me to keep going. Without her I couldn't have maintained this tool for so many years. Even though I didn't achieve much, she was still proud of me. R.I.P. and may God bless her.

Supported by IntelliJ IDEA

IntelliJ IDEA is a Java integrated development environment (IDE) for developing computer software.
It is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition,
and in a proprietary commercial edition. Both can be used for commercial development.


