
Database - etcd Backup and Restore


1. Contents of the member directory:

snap: holds snapshot data. etcd takes these snapshots to keep the number of WAL files from growing too large; they record the state of the etcd data.
wal: holds the write-ahead log, whose most important role is recording the complete history of every data change. In etcd, every modification must first be written to the WAL before it is committed.
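For reference, the layout can be checked directly on a member (the path below is the data-dir used later in this post):

# Inspect a member's on-disk layout
ls /home/s/data/kube_etcd/member
# expected: the two subdirectories described above
#   snap/   snapshot files
#   wal/    write-ahead log segments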


2. Backup: only one of the etcd members needs to be backed up.

export ETCDCTL_API=3

/home/s/bin/etcdctl  --endpoints=https://ip:2379  --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem  snapshot save /home/s/backup/etcd/etcd.db

[root@localhost kube_etcd]# /home/s/bin/etcdctl  --endpoints=https://11.0.1.149:2379  --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem  snapshot save /home/s/etcd.db
{"level":"info","ts":1691654104.1296446,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/home/s/etcd.db.part"}
{"level":"info","ts":1691654104.1369681,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1691654104.1370292,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"https://11.0.1.149:2379"}
{"level":"info","ts":1691654104.4866412,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":1691654104.525598,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"https://11.0.1.149:2379","size":"23 MB","took":"now"}
{"level":"info","ts":1691654104.5257566,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/home/s/etcd.db"}
Snapshot saved at /home/s/etcd.db
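Before relying on the backup, it is worth sanity-checking the snapshot file. A minimal check with the same etcdctl binary (on etcd 3.5 this subcommand also prints a deprecation notice pointing at etcdutl):

/home/s/bin/etcdctl snapshot status /home/s/etcd.db --write-out=table
# prints a one-row table with the snapshot's hash, revision, total keys and total size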


3. Restore: the restore must be performed on all three etcd nodes.

a. Stop the etcd service: systemctl stop kube_etcd

b. Remove the old data

 rm -rf /home/s/data/kube_etcd/member
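A slightly safer variant (my suggestion, not part of the original procedure) is to move the old data aside instead of deleting it, so it can be put back if the restore fails:

# Keep the old member directory until the restored cluster has been verified
mv /home/s/data/kube_etcd/member /home/s/data/kube_etcd/member.bak.$(date +%F)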

c. Copy the etcd.db backup file to the other two nodes

for i in etcd2 etcd3; do scp etcd.db root@$i:/root; done
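To confirm the copies arrived intact, a quick checksum comparison can be run (same placeholder hostnames as above):

md5sum etcd.db                                                   # checksum on the source node
for i in etcd2 etcd3; do ssh root@$i md5sum /root/etcd.db; done  # checksums on the targets; all three should match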

d. Restore the data

[root@localhost data]# /home/s/bin/etcdctl   --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem  snapshot restore /root/etcd-snap73.db  --data-dir=/home/s/data/kube_etcd/   --initial-advertise-peer-urls=https://11.0.1.149:2380  --initial-cluster="11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380"  --name 11.0.1.149
Deprecated: Use `etcdutl snapshot restore` instead.

2023-08-10T16:06:07+08:00    info    snapshot/v3_snapshot.go:251    restoring snapshot    {"path": "/root/etcd-snap73.db", "wal-dir": "/home/s/data/kube_etcd/member/wal", "data-dir": "/home/s/data/kube_etcd/", "snap-dir": "/home/s/data/kube_etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/snapshot/v3_snapshot.go:257\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdctl/v3/ctlv3/command.snapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/command/snapshot_command.go:128\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:897\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.Start\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:107\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.MustStart\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:111\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/main.go:59\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
2023-08-10T16:06:07+08:00    info    membership/store.go:119    Trimming membership information from the backend...
2023-08-10T16:06:07+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "6e45a7efe4f43e72", "added-peer-peer-urls": ["https://11.0.1.149:2380"]}
2023-08-10T16:06:07+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "83b35c880ec44a22", "added-peer-peer-urls": ["https://11.0.1.151:2380"]}
2023-08-10T16:06:07+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "d0c21986d57fb7ce", "added-peer-peer-urls": ["https://11.0.1.150:2380"]}
2023-08-10T16:06:07+08:00    info    snapshot/v3_snapshot.go:272    restored snapshot    {"path": "/root/etcd-snap73.db", "wal-dir": "/home/s/data/kube_etcd/member/wal", "data-dir": "/home/s/data/kube_etcd/", "snap-dir": "/home/s/data/kube_etcd/member/snap"}
[root@localhost kube_etcd]# export ETCDCTL_API=3
[root@localhost kube_etcd]# ls
[root@localhost kube_etcd]# /home/s/bin/etcdctl   --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem  snapshot restore /root/etcd-snap73.db  --data-dir=/home/s/data/kube_etcd   --initial-advertise-peer-urls=https://11.0.1.150:2380  --initial-cluster="11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380"  --name 11.0.1.150
Deprecated: Use `etcdutl snapshot restore` instead.

2023-08-10T16:07:39+08:00    info    snapshot/v3_snapshot.go:251    restoring snapshot    {"path": "/root/etcd-snap73.db", "wal-dir": "/home/s/data/kube_etcd/member/wal", "data-dir": "/home/s/data/kube_etcd", "snap-dir": "/home/s/data/kube_etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/snapshot/v3_snapshot.go:257\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdctl/v3/ctlv3/command.snapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/command/snapshot_command.go:128\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:897\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.Start\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:107\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.MustStart\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:111\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/main.go:59\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
2023-08-10T16:07:39+08:00    info    membership/store.go:119    Trimming membership information from the backend...
2023-08-10T16:07:39+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "6e45a7efe4f43e72", "added-peer-peer-urls": ["https://11.0.1.149:2380"]}
2023-08-10T16:07:39+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "83b35c880ec44a22", "added-peer-peer-urls": ["https://11.0.1.151:2380"]}
2023-08-10T16:07:39+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "d0c21986d57fb7ce", "added-peer-peer-urls": ["https://11.0.1.150:2380"]}
2023-08-10T16:07:39+08:00    info    snapshot/v3_snapshot.go:272    restored snapshot    {"path": "/root/etcd-snap73.db", "wal-dir": "/home/s/data/kube_etcd/member/wal", "data-dir": "/home/s/data/kube_etcd", "snap-dir": "/home/s/data/kube_etcd/member/snap"}
[root@localhost kube_etcd]# ls
member
[root@localhost kube_etcd]# export ETCDCTL_API=3
[root@localhost kube_etcd]# /home/s/bin/etcdctl   --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem  snapshot restore /root/etcd-snap73.db  --data-dir=/home/s/data/kube_etcd   --initial-advertise-peer-urls=https://11.0.1.151:2380  --initial-cluster="11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380"  --name 11.0.1.151
Deprecated: Use `etcdutl snapshot restore` instead.

2023-08-10T16:09:06+08:00    info    snapshot/v3_snapshot.go:251    restoring snapshot    {"path": "/root/etcd-snap73.db", "wal-dir": "/home/s/data/kube_etcd/member/wal", "data-dir": "/home/s/data/kube_etcd", "snap-dir": "/home/s/data/kube_etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/snapshot/v3_snapshot.go:257\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdutl/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdctl/v3/ctlv3/command.snapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/command/snapshot_command.go:128\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/[email protected]/command.go:897\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.Start\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:107\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.MustStart\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/ctlv3/ctl.go:111\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/etcdctl/main.go:59\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
2023-08-10T16:09:07+08:00    info    membership/store.go:119    Trimming membership information from the backend...
2023-08-10T16:09:07+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "6e45a7efe4f43e72", "added-peer-peer-urls": ["https://11.0.1.149:2380"]}
2023-08-10T16:09:07+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "83b35c880ec44a22", "added-peer-peer-urls": ["https://11.0.1.151:2380"]}
2023-08-10T16:09:07+08:00    info    membership/cluster.go:393    added member    {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "d0c21986d57fb7ce", "added-peer-peer-urls": ["https://11.0.1.150:2380"]}
2023-08-10T16:09:07+08:00    info    snapshot/v3_snapshot.go:272    restored snapshot    {"path": "/root/etcd-snap73.db", "wal-dir": "/home/s/data/kube_etcd/member/wal", "data-dir": "/home/s/data/kube_etcd", "snap-dir": "/home/s/data/kube_etcd/member/snap"}
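The three restore invocations above differ only in the node IP, so they can be parameterized. A sketch, assuming the snapshot sits at /root/etcd.db on every node as in step c (the session above used the author's own file name, /root/etcd-snap73.db):

# Run on each node after setting NODE_IP to that node's own IP
NODE_IP=11.0.1.149
/home/s/bin/etcdctl \
  --cacert=/home/s/cert/kube_etcd/ca.pem \
  --cert=/home/s/cert/kube_etcd/client.pem \
  --key=/home/s/cert/kube_etcd/client-key.pem \
  snapshot restore /root/etcd.db \
  --data-dir=/home/s/data/kube_etcd \
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \
  --initial-cluster="11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380" \
  --name ${NODE_IP}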

e. Start the service (run on all three nodes)

Note: because my service starts as the bsafe user, the ownership of the member directory has to be changed first.

[root@localhost kube_etcd]# chown -R bsafe:bsafe /home/s/data/kube_etcd/member
[root@localhost kube_etcd]# ll
total 0
drwx------. 4 bsafe bsafe 29 Aug 10 16:09 member
[root@localhost kube_etcd]# systemctl start kube_etcd
[root@localhost kube_etcd]# systemctl status kube_etcd
● kube_etcd.service - kubenetes etcd key-value store
   Loaded: loaded (/etc/systemd/system/kube_etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2023-08-10 16:11:18 CST; 3s ago
     Docs: https://github.com/coreos/etcd
 Main PID: 11400 (etcd)
   CGroup: /system.slice/kube_etcd.service
           └─11400 /home/s/bin/etcd --config-file /home/s/etc/kube_etcd/kube_etcd.conf

Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.106+0800","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-me...7efe4f43e72"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.106+0800","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-r...7efe4f43e72"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.106+0800","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-r...7efe4f43e72"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.106+0800","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-me...986d57fb7ce"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.107+0800","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-me...986d57fb7ce"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.107+0800","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-r...986d57fb7ce"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.107+0800","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-r...986d57fb7ce"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.107+0800","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":16}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.109+0800","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"11.0.1.151:2380"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.109+0800","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"11.0.1.151:2380"}
Hint: Some lines were ellipsized, use -l to show in full.

f. Check the etcd cluster status

[root@localhost data]#  /home/s/bin/etcdctl  --endpoints=https://11.0.1.149:2379,https://11.0.1.150:2379,https://11.0.1.151:2379  --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem  endpoint status --write-out=table
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://11.0.1.149:2379 | 6e45a7efe4f43e72 |   3.5.0 |   23 MB |     false |      false |         2 |         11 |                 11 |        |
| https://11.0.1.150:2379 | d0c21986d57fb7ce |   3.5.0 |   23 MB |     false |      false |         2 |         11 |                 11 |        |
| https://11.0.1.151:2379 | 83b35c880ec44a22 |   3.5.0 |   23 MB |      true |      false |         2 |         11 |                 11 |        |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+


[root@localhost data]#  /home/s/bin/etcdctl  --endpoints=https://11.0.1.149:2379,https://11.0.1.150:2379,https://11.0.1.151:2379  --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem  endpoint health
https://11.0.1.151:2379 is healthy: successfully committed proposal: took = 19.705544ms
https://11.0.1.150:2379 is healthy: successfully committed proposal: took = 19.187536ms
https://11.0.1.149:2379 is healthy: successfully committed proposal: took = 23.052489ms
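member list is another quick cross-check that all three peers rejoined with the expected IDs (same flags as above):

/home/s/bin/etcdctl --endpoints=https://11.0.1.149:2379 \
  --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem \
  member list --write-out=table
# the three member IDs should match the added-peer-id values in the restore logs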


Notes:

1. Under k8s, containerized etcd works on the same principle. Note how to stop the service: move the static pod YAML out of /etc/kubernetes/manifests and the pod is stopped automatically (a sketch follows the configuration file below).
2. The etcd configuration file:

name: "11.0.1.149"
data-dir: /home/s/data/kube_etcd
wal-dir: 
snapshot-count: 50000
heartbeat-interval: 300
election-timeout: 5000
max-request-bytes: 10485760
quota-backend-bytes: 5368709120
listen-peer-urls: https://11.0.1.149:2380
listen-client-urls: http://localhost:2379,https://11.0.1.149:2379
max-snapshots: 5
max-wals: 3
cors: 
initial-advertise-peer-urls: https://11.0.1.149:2380
advertise-client-urls: https://11.0.1.149:2379
discovery: 
discovery-fallback: 'proxy'
discovery-proxy: 
discovery-srv: 

initial-cluster: "11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380"

initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'existing'
strict-reconfig-check: false
enable-v2: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0

client-transport-security: 
  ca-file: 
  cert-file: /home/s/cert/kube_etcd/peer.pem
  key-file: /home/s/cert/kube_etcd/peer-key.pem
  client-cert-auth: False
  trusted-ca-file: /home/s/cert/kube_etcd/ca.pem
  auto-tls: false

peer-transport-security: 
  ca-file:
  cert-file: /home/s/cert/kube_etcd/peer.pem
  key-file: /home/s/cert/kube_etcd/peer-key.pem
  client-cert-auth: False
  trusted-ca-file: /home/s/cert/kube_etcd/ca.pem
  auto-tls: false

debug: false
log-package-levels: 
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"
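As mentioned in note 1, for a containerized etcd the "stop the service" step is done by moving the static pod manifest. A sketch, assuming the default kubeadm manifest name etcd.yaml (adjust to your own file name):

# kubelet stops the static pod as soon as its manifest leaves the directory
mv /etc/kubernetes/manifests/etcd.yaml /tmp/
# ... remove the old data and run the snapshot restore as above ...
# moving the manifest back makes kubelet recreate the pod
mv /tmp/etcd.yaml /etc/kubernetes/manifests/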


From: https://www.cnblogs.com/aroin/p/17620703.html
