1. Data under the member directory:
snap: holds snapshot data. etcd takes these snapshots to keep the WAL from growing without bound; each snapshot captures the state of the etcd keyspace.
wal: holds the write-ahead log, which records the complete history of every data change. In etcd, every modification must be written to the WAL before it is committed.
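The layout described above can be illustrated with a quick sketch. The temp directory here is only a stand-in; in this deployment the real path is /home/s/data/kube_etcd/member:

```shell
# Recreate the expected member/ layout in a temp dir and list it.
# snap/ holds periodic snapshots of the keyspace; wal/ holds the write-ahead log.
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/member/snap" "$DATA_DIR/member/wal"
LAYOUT=$(cd "$DATA_DIR" && find member -type d | sort)
echo "$LAYOUT"
rm -rf "$DATA_DIR"
```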
2. Backup. Only one of the etcd nodes needs to be backed up:
export ETCDCTL_API=3
/home/s/bin/etcdctl --endpoints=https://ip:2379 --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem snapshot save /home/s/backup/etcd/etcd.db
[root@localhost kube_etcd]# /home/s/bin/etcdctl --endpoints=https://11.0.1.149:2379 --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem snapshot save /home/s/etcd.db
{"level":"info","ts":1691654104.1296446,"caller":"snapshot/v3_snapshot.go:68","msg":"created temporary db file","path":"/home/s/etcd.db.part"}
{"level":"info","ts":1691654104.1369681,"logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1691654104.1370292,"caller":"snapshot/v3_snapshot.go:76","msg":"fetching snapshot","endpoint":"https://11.0.1.149:2379"}
{"level":"info","ts":1691654104.4866412,"logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":1691654104.525598,"caller":"snapshot/v3_snapshot.go:91","msg":"fetched snapshot","endpoint":"https://11.0.1.149:2379","size":"23 MB","took":"now"}
{"level":"info","ts":1691654104.5257566,"caller":"snapshot/v3_snapshot.go:100","msg":"saved","path":"/home/s/etcd.db"}
Snapshot saved at /home/s/etcd.db
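For routine use, the one-off command above can be wrapped in a small script that timestamps each snapshot and prunes old ones. This is a sketch: the script name, retention count, and the dry-run echo are assumptions, not part of the original setup; the actual etcdctl call (shown commented out) needs the same cert flags as above:

```shell
# etcd_backup.sh -- hypothetical wrapper around the snapshot command above.
BACKUP_DIR=/home/s/backup/etcd
KEEP=7                                   # snapshots to retain (assumed value)
STAMP=$(date +%Y%m%d-%H%M%S)
SNAP="$BACKUP_DIR/etcd-$STAMP.db"
echo "would run: etcdctl snapshot save $SNAP"
# Real call, with the cert paths from the command above:
# ETCDCTL_API=3 /home/s/bin/etcdctl --endpoints=https://11.0.1.149:2379 \
#   --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem \
#   --key=/home/s/cert/kube_etcd/client-key.pem snapshot save "$SNAP"
# Prune: keep only the $KEEP newest snapshots.
# ls -1t "$BACKUP_DIR"/etcd-*.db | tail -n +$((KEEP + 1)) | xargs -r rm -f
```

Dropped into cron, this gives a rolling set of dated snapshots instead of a single etcd.db that gets overwritten.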
3. Restore. The restore must be performed on all three etcd nodes.
a. Stop the etcd service: systemctl stop kube_etcd
b. Remove the old data
rm -rf /home/s/data/kube_etcd/member
c. Copy the etcd.db backup file to the other two nodes
for i in etcd2 etcd3; do scp etcd.db root@$i:/root; done
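After copying, it is worth confirming the snapshot arrived intact — `etcdctl snapshot status etcd.db` reports its hash, or a plain checksum comparison works. Below is a local demonstration of the checksum idea with a stand-in file instead of a real snapshot (the file contents and paths are fabricated for the demo):

```shell
# Demonstrate checksum verification with a stand-in file.
WORK=$(mktemp -d)
head -c 1024 /dev/urandom > "$WORK/etcd.db"   # stand-in for the real snapshot
cp "$WORK/etcd.db" "$WORK/etcd-copy.db"       # simulates the scp to etcd2/etcd3
SRC_SUM=$(sha256sum "$WORK/etcd.db" | awk '{print $1}')
DST_SUM=$(sha256sum "$WORK/etcd-copy.db" | awk '{print $1}')
[ "$SRC_SUM" = "$DST_SUM" ] && echo "checksum OK"
```

On the real nodes the second hash would come from `ssh root@etcd2 sha256sum /root/etcd.db`.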
d. Restore the data
[root@localhost data]# /home/s/bin/etcdctl --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem snapshot restore /root/etcd-snap73.db --data-dir=/home/s/data/kube_etcd/ --initial-advertise-peer-urls=https://11.0.1.149:2380 --initial-cluster="11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380" --name 11.0.1.149
Deprecated: Use `etcdutl snapshot restore` instead.
2023-08-10T16:06:07+08:00 info snapshot/v3_snapshot.go:251 restoring snapshot {"path": "/root/etcd-snap73.db", "wal-dir": "/home/s/data/kube_etcd/member/wal", "data-dir": "/home/s/data/kube_etcd/", "snap-dir": "/home/s/data/kube_etcd/member/snap"}
2023-08-10T16:06:07+08:00 info membership/store.go:119 Trimming membership information from the backend...
2023-08-10T16:06:07+08:00 info membership/cluster.go:393 added member {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "6e45a7efe4f43e72", "added-peer-peer-urls": ["https://11.0.1.149:2380"]}
2023-08-10T16:06:07+08:00 info membership/cluster.go:393 added member {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "83b35c880ec44a22", "added-peer-peer-urls": ["https://11.0.1.151:2380"]}
2023-08-10T16:06:07+08:00 info membership/cluster.go:393 added member {"cluster-id": "af94405c9fa143f5", "local-member-id": "0", "added-peer-id": "d0c21986d57fb7ce", "added-peer-peer-urls": ["https://11.0.1.150:2380"]}
2023-08-10T16:06:07+08:00 info snapshot/v3_snapshot.go:272 restored snapshot {"path": "/root/etcd-snap73.db", "wal-dir": "/home/s/data/kube_etcd/member/wal", "data-dir": "/home/s/data/kube_etcd/", "snap-dir": "/home/s/data/kube_etcd/member/snap"}
[root@localhost kube_etcd]# export ETCDCTL_API=3
[root@localhost kube_etcd]# ls
[root@localhost kube_etcd]# /home/s/bin/etcdctl --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem snapshot restore /root/etcd-snap73.db --data-dir=/home/s/data/kube_etcd --initial-advertise-peer-urls=https://11.0.1.150:2380 --initial-cluster="11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380" --name 11.0.1.150
(restore output identical to the 11.0.1.149 run, apart from timestamps)
[root@localhost kube_etcd]# ls
member
[root@localhost kube_etcd]# export ETCDCTL_API=3
[root@localhost kube_etcd]# /home/s/bin/etcdctl --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem snapshot restore /root/etcd-snap73.db --data-dir=/home/s/data/kube_etcd --initial-advertise-peer-urls=https://11.0.1.151:2380 --initial-cluster="11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380" --name 11.0.1.151
(restore output identical to the 11.0.1.149 run, apart from timestamps)
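The three restore invocations differ only in `--name` and `--initial-advertise-peer-urls`, so a small loop can generate the per-node command lines and avoid copy-paste mistakes. This sketch only prints the commands (cert flags omitted for brevity; add the `--cacert/--cert/--key` flags from above before running them):

```shell
# Print the restore command for each member; run the printed line on that node.
CLUSTER="11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380"
CMDS=$(for NODE in 11.0.1.149 11.0.1.150 11.0.1.151; do
  echo "/home/s/bin/etcdctl snapshot restore /root/etcd-snap73.db --data-dir=/home/s/data/kube_etcd --initial-advertise-peer-urls=https://$NODE:2380 --initial-cluster=\"$CLUSTER\" --name $NODE"
done)
echo "$CMDS"
```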
e. Start the service. Run this on all three nodes.
Note: in this deployment the service runs as the bsafe user, so the ownership of the member directory must be changed accordingly.
[root@localhost kube_etcd]# chown -R bsafe:bsafe /home/s/data/kube_etcd/member
[root@localhost kube_etcd]# ll
total 0
drwx------. 4 bsafe bsafe 29 Aug 10 16:09 member
[root@localhost kube_etcd]# systemctl start kube_etcd
[root@localhost kube_etcd]# systemctl status kube_etcd
● kube_etcd.service - kubenetes etcd key-value store
   Loaded: loaded (/etc/systemd/system/kube_etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2023-08-10 16:11:18 CST; 3s ago
     Docs: https://github.com/coreos/etcd
 Main PID: 11400 (etcd)
   CGroup: /system.slice/kube_etcd.service
           └─11400 /home/s/bin/etcd --config-file /home/s/etc/kube_etcd/kube_etcd.conf
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.106+0800","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-me...7efe4f43e72"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.106+0800","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-r...7efe4f43e72"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.106+0800","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-r...7efe4f43e72"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.106+0800","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-me...986d57fb7ce"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.107+0800","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-me...986d57fb7ce"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.107+0800","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-r...986d57fb7ce"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.107+0800","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-r...986d57fb7ce"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.107+0800","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":16}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.109+0800","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"11.0.1.151:2380"}
Aug 10 16:11:18 localhost.localdomain etcd[11400]: {"level":"info","ts":"2023-08-10T16:11:18.109+0800","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"11.0.1.151:2380"}
Hint: Some lines were ellipsized, use -l to show in full.
f. Check the etcd cluster status
[root@localhost data]# /home/s/bin/etcdctl --endpoints=https://11.0.1.149:2379,https://11.0.1.150:2379,https://11.0.1.151:2379 --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem endpoint status --write-out=table
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://11.0.1.149:2379 | 6e45a7efe4f43e72 |   3.5.0 |   23 MB |     false |      false |         2 |         11 |                 11 |        |
| https://11.0.1.150:2379 | d0c21986d57fb7ce |   3.5.0 |   23 MB |     false |      false |         2 |         11 |                 11 |        |
| https://11.0.1.151:2379 | 83b35c880ec44a22 |   3.5.0 |   23 MB |      true |      false |         2 |         11 |                 11 |        |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@localhost data]# /home/s/bin/etcdctl --endpoints=https://11.0.1.149:2379,https://11.0.1.150:2379,https://11.0.1.151:2379 --cacert=/home/s/cert/kube_etcd/ca.pem --cert=/home/s/cert/kube_etcd/client.pem --key=/home/s/cert/kube_etcd/client-key.pem endpoint health
https://11.0.1.151:2379 is healthy: successfully committed proposal: took = 19.705544ms
https://11.0.1.150:2379 is healthy: successfully committed proposal: took = 19.187536ms
https://11.0.1.149:2379 is healthy: successfully committed proposal: took = 23.052489ms
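In a monitoring script, the health output above reduces to a simple healthy-member count. The sketch below runs against a captured sample of that output rather than a live cluster:

```shell
# Count healthy members from `etcdctl endpoint health` output (sample captured above).
HEALTH_OUTPUT='https://11.0.1.151:2379 is healthy: successfully committed proposal: took = 19.705544ms
https://11.0.1.150:2379 is healthy: successfully committed proposal: took = 19.187536ms
https://11.0.1.149:2379 is healthy: successfully committed proposal: took = 23.052489ms'
HEALTHY=$(printf '%s\n' "$HEALTH_OUTPUT" | grep -c "is healthy")
echo "healthy members: $HEALTHY"
```

On a live cluster, pipe the real etcdctl invocation into the same `grep -c` and alert when the count drops below quorum (2 of 3 here).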
Notes:
1. Under k8s, container-deployed etcd works the same way. To stop the service, move the static pod YAML out of /etc/kubernetes/manifests; kubelet will stop the pod automatically.
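The static-pod stop/start can be sketched as follows. The temp directories stand in for the real /etc/kubernetes/manifests and a parking directory of your choice:

```shell
# Simulate stopping a static-pod etcd by moving its manifest out of the
# kubelet manifest directory; kubelet tears the pod down once the file is gone.
MANIFESTS=$(mktemp -d)      # stands in for /etc/kubernetes/manifests
PARKED=$(mktemp -d)         # any directory outside the manifest dir
touch "$MANIFESTS/etcd.yaml"
mv "$MANIFESTS/etcd.yaml" "$PARKED/"        # "stop": kubelet removes the pod
# ...perform the restore while the pod is down...
mv "$PARKED/etcd.yaml" "$MANIFESTS/"        # "start": kubelet recreates the pod
ls "$MANIFESTS"
```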
2. The etcd configuration file
name: "11.0.1.149"
data-dir: /home/s/data/kube_etcd
wal-dir:
snapshot-count: 50000
heartbeat-interval: 300
election-timeout: 5000
max-request-bytes: 10485760
quota-backend-bytes: 5368709120
listen-peer-urls: https://11.0.1.149:2380
listen-client-urls: http://localhost:2379,https://11.0.1.149:2379
max-snapshots: 5
max-wals: 3
cors:
initial-advertise-peer-urls: https://11.0.1.149:2380
advertise-client-urls: https://11.0.1.149:2379
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: "11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380"
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'existing'
strict-reconfig-check: false
enable-v2: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  ca-file:
  cert-file: /home/s/cert/kube_etcd/peer.pem
  key-file: /home/s/cert/kube_etcd/peer-key.pem
  client-cert-auth: False
  trusted-ca-file: /home/s/cert/kube_etcd/ca.pem
  auto-tls: false
peer-transport-security:
  ca-file:
  cert-file: /home/s/cert/kube_etcd/peer.pem
  key-file: /home/s/cert/kube_etcd/peer-key.pem
  client-cert-auth: False
  trusted-ca-file: /home/s/cert/kube_etcd/ca.pem
  auto-tls: false
debug: false
log-package-levels:
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"
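Before starting a restored member, a quick grep can confirm the config still carries the fields the restore depends on. A sketch: the heredoc holds only the restore-relevant lines of the config above, and the key list is an assumption about what matters for this setup:

```shell
# Sanity-check that restore-relevant keys are present in the config file.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
name: "11.0.1.149"
data-dir: /home/s/data/kube_etcd
initial-cluster: "11.0.1.149=https://11.0.1.149:2380,11.0.1.150=https://11.0.1.150:2380,11.0.1.151=https://11.0.1.151:2380"
initial-cluster-state: 'existing'
EOF
MISSING=0
for KEY in name data-dir initial-cluster initial-cluster-state; do
  grep -q "^$KEY:" "$CONF" || { echo "missing: $KEY"; MISSING=1; }
done
[ "$MISSING" -eq 0 ] && echo "config keys OK"
```

On the real node, point `$CONF` at /home/s/etc/kube_etcd/kube_etcd.conf instead of the heredoc.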
From: https://www.cnblogs.com/aroin/p/17620703.html