Redis persistence: RDB and AOF

vim redis.conf

protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /var/run/redis_6379.pid
loglevel notice
logfile "/var/log/redis/redis.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
save 900 1
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir /var/lib/redis
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass lzjasdqq
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes

The persistence-related subset of these settings:

loglevel notice
logfile "/var/log/redis/redis.log"
daemonize yes
save 900 1
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
requirepass lzjasdqq
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
dir /var/lib/redis

Here save 900 1 writes an RDB snapshot when at least 1 key has changed within 900 seconds, while appendonly yes with appendfsync everysec keeps an append-only log that is fsynced once per second.

Create the log and data directories, then restart Redis:

mkdir -p /var/log/redis
mkdir -p /var/lib/redis
systemctl restart redis

[root@host3 bin]# ls /var/lib/redis/
appendonly.aof  dump.rdb
[root@host1 bin]# ls /var/lib/redis/
appendonly.aof  dump.rdb

127.0.0.1:6379> config get dir
1) "dir"
2) "/var/lib/redis"
127.0.0.1:6379> config get dbfilename
1) "dbfilename"
2) "dump.rdb"
127.0.0.1:6379> config get appendonly
1) "appendonly"
2) "yes"
127.0.0.1:6379> config get appendfilename
1) "appendfilename"
2) "appendonly.aof"

Persistence by itself is not a substitute for backups; you should also define a backup strategy and back up the Redis database regularly (a minimal sketch follows these notes).
The SAVE command creates the dump.rdb file in the dir directory, blocking the server while the snapshot is written.
The BGSAVE command also creates dump.rdb in the dir directory, writing it from a forked child process in the background.
To restore data, place the dump.rdb file in the dir directory and restart Redis.
To find the dir directory, run CONFIG GET dir or check the dir setting in the configuration file.
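The following is a minimal backup sketch along those lines, assuming the password and dir configured above; the destination /backup/redis, the script name rdb-backup.sh, and the polling loop are illustrative assumptions, not part of the original post.

#!/usr/bin/env bash
# rdb-backup.sh -- hypothetical example: take a fresh snapshot and copy the RDB file aside.
set -euo pipefail

BACKUP_DIR=/backup/redis          # assumed destination, adjust to your environment
mkdir -p "$BACKUP_DIR"

# Ask Redis for a fresh background snapshot.
redis-cli -a lzjasdqq BGSAVE

# Wait until rdb_bgsave_in_progress in INFO persistence drops back to 0.
while [ "$(redis-cli -a lzjasdqq INFO persistence | tr -d '\r' \
          | awk -F: '/rdb_bgsave_in_progress/{print $2}')" = "1" ]; do
    sleep 1
done

# Keep a timestamped copy; restoring is the reverse: stop Redis, put the chosen
# dump.rdb back into the dir reported by CONFIG GET dir, and start Redis again.
cp /var/lib/redis/dump.rdb "$BACKUP_DIR/dump-$(date +%F-%H%M%S).rdb"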
Redis performance testing

Run 10,000 requests concurrently to measure performance:

redis-benchmark -n 10000 -q
redis-benchmark -n 10000 -q -a lzjasdqq    (-a supplies the password set by requirepass)

[root@host1 bin]# redis-benchmark -n 10000 -q -a lzjasdqq
PING_INLINE: 92592.59 requests per second, p50=0.271 msec
PING_MBULK: 85470.09 requests per second, p50=0.295 msec
SET: 72992.70 requests per second, p50=0.559 msec
GET: 82644.62 requests per second, p50=0.303 msec
INCR: 69930.07 requests per second, p50=0.551 msec
LPUSH: 72463.77 requests per second, p50=0.567 msec
RPUSH: 81967.21 requests per second, p50=0.487 msec
LPOP: 58479.53 requests per second, p50=0.615 msec
RPOP: 52083.33 requests per second, p50=0.535 msec
SADD: 90909.09 requests per second, p50=0.279 msec
HSET: 62893.08 requests per second, p50=0.671 msec
SPOP: 70422.53 requests per second, p50=0.327 msec
ZADD: 81967.21 requests per second, p50=0.295 msec
ZPOPMIN: 87719.30 requests per second, p50=0.295 msec
LPUSH (needed to benchmark LRANGE): 45454.55 requests per second, p50=0.743 msec
LRANGE_100 (first 100 elements): 40650.41 requests per second, p50=0.615 msec
LRANGE_300 (first 300 elements): 19607.84 requests per second, p50=1.279 msec
LRANGE_500 (first 500 elements): 12886.60 requests per second, p50=1.943 msec
LRANGE_600 (first 600 elements): 11574.07 requests per second, p50=2.167 msec
MSET (10 keys): 46082.95 requests per second, p50=0.759 msec

redis-benchmark -p 6379 -n 100000 -c 20 -a lzjasdqq

[root@host3 bin]# redis-benchmark --help
Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]

 -h <hostname>      Server hostname (default 127.0.0.1)
 -p <port>          Server port (default 6379)
 -s <socket>        Server socket (overrides host and port)
 -a <password>      Password for Redis Auth
 --user <username>  Used to send ACL style 'AUTH username pass'. Needs -a.
 -c <clients>       Number of parallel connections (default 50)
 -n <requests>      Total number of requests (default 100000)
 -d <size>          Data size of SET/GET value in bytes (default 3)
 --dbnum <db>       SELECT the specified db number (default 0)
 --threads <num>    Enable multi-thread mode.
 --cluster          Enable cluster mode.
 --enable-tracking  Send CLIENT TRACKING on before starting benchmark.
 -k <boolean>       1=keep alive 0=reconnect (default 1)
 -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD,
                    random members and scores for ZADD.
                    Using this option the benchmark will expand the string __rand_int__
                    inside an argument with a 12 digits number in the specified range
                    from 0 to keyspacelen-1. The substitution changes every time a command
                    is executed. Default tests use this to hit random keys in the
                    specified range.
 -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
 -q                 Quiet. Just show query/sec values
 --precision        Number of decimal places to display in latency output (default 0)
 --csv              Output in CSV format
 -l                 Loop. Run the tests forever
 -t <tests>         Only run the comma separated list of tests. The test
                    names are the same as the ones produced as output.
 -I                 Idle mode. Just open N idle connections and wait.
 --help             Output this help and exit.
 --version          Output version and exit.

Examples:

 Run the benchmark with the default configuration against 127.0.0.1:6379:
   $ redis-benchmark

 Use 20 parallel clients, for a total of 100k requests, against 192.168.1.1:
   $ redis-benchmark -h 192.168.1.1 -p 6379 -n 100000 -c 20

 Fill 127.0.0.1:6379 with about 1 million keys only using the SET test:
   $ redis-benchmark -t set -n 1000000 -r 100000000

 Benchmark 127.0.0.1:6379 for a few commands producing CSV output:
   $ redis-benchmark -t ping,set,get -n 100000 --csv

 Benchmark a specific command line:
   $ redis-benchmark -r 10000 -n 10000 eval 'return redis.call("ping")' 0

 Fill a list with 10000 random elements:
   $ redis-benchmark -r 10000 -n 10000 lpush mylist __rand_int__

redis-benchmark -h 127.0.0.1 -p 6379 -t get -n 10000 -q -a lzjasdqq
redis-benchmark -h 127.0.0.1 -p 6379 -t get -n 1000000000 -q -a lzjasdqq
GET: 90909.09 requests per second, p50=0.263 msec
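As a variation on the single-command runs above, pipelining shows how much of the measured latency is round-trip time rather than server work. The key-space size (100000) and pipeline depth (16) below are illustrative values, not taken from the original runs:

# Same host and password as above; -t restricts the run to SET and GET,
# -r spreads the load over 100000 random keys, -P 16 pipelines 16 requests per round trip.
redis-benchmark -h 127.0.0.1 -p 6379 -a lzjasdqq -t set,get -n 100000 -r 100000 -P 16 -q

With -P greater than 1, throughput typically rises sharply because many commands share one round trip, so compare pipelined and non-pipelined numbers separately.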