
etcd Backup and Restore


1. First, check the etcd cluster node information

[root@host105 cert]# ETCDCTL_API=3 etcdctl --cacert=/opt/cert//etcd.pem --cert=/opt/cert//etcd.pem --key=/opt/cert//etcd-key.pem --endpoints="https://192.168.0.105:2379,https://192.168.0.106:2379,https://192.168.0.189:2379" endpoint health
https://192.168.0.106:2379 is healthy: successfully committed proposal: took = 13.787391ms
https://192.168.0.105:2379 is healthy: successfully committed proposal: took = 13.955548ms
https://192.168.0.189:2379 is healthy: successfully committed proposal: took = 14.370631ms

Check the node list:

[root@host106 ~]# ETCDCTL_API=3 etcdctl --cacert=/opt/cert//etcd.pem --cert=/opt/cert//etcd.pem --key=/opt/cert//etcd-key.pem --endpoints="https://192.168.0.105:2379,https://192.168.0.106:2379,https://192.168.0.189:2379" endpoint status --write-out='table'
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.0.105:2379 | a1521095cffde44b |   3.5.4 |   20 kB |     false |      false |         5 |         16 |                 16 |        |
| https://192.168.0.106:2379 | eb83c838a536671f |   3.5.4 |   20 kB |      true |      false |         5 |         16 |                 16 |        |
| https://192.168.0.189:2379 | ce9e5937db5e0599 |   3.5.4 |   20 kB |     false |      false |         5 |         16 |                 16 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
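The TLS flags above are repeated in every command. A small sketch (the variable names are my own, not from the original) that collects them once so later commands stay short; shown as a dry run that prints the command instead of executing it:

```shell
# Collect the repeated TLS flags and endpoint list into variables
# (paths match the cert layout used in the commands above).
CERT_DIR=/opt/cert
ENDPOINTS="https://192.168.0.105:2379,https://192.168.0.106:2379,https://192.168.0.189:2379"
ETCDCTL_FLAGS="--cacert=${CERT_DIR}/etcd.pem --cert=${CERT_DIR}/etcd.pem --key=${CERT_DIR}/etcd-key.pem --endpoints=${ENDPOINTS}"
# Dry run: print the health-check command rather than executing it.
echo "ETCDCTL_API=3 etcdctl ${ETCDCTL_FLAGS} endpoint health"
```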

 

2. Back up the etcd data on the leader node

[root@host106 ~]# etcdctl --cacert=/opt/cert//etcd.pem --cert=/opt/cert//etcd.pem --key=/opt/cert//etcd-key.pem --endpoints="https://192.168.0.106:2379" snapshot save etcd02-bak-20230526.db
{"level":"info","ts":"2023-05-26T16:32:39.442+0800","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"etcd02-bak-20230526.db.part"}
{"level":"info","ts":"2023-05-26T16:32:39.448+0800","logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2023-05-26T16:32:39.448+0800","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://192.168.0.106:2379"}
{"level":"info","ts":"2023-05-26T16:32:39.477+0800","logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2023-05-26T16:32:39.498+0800","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://192.168.0.106:2379","size":"20 kB","took":"now"}
{"level":"info","ts":"2023-05-26T16:32:39.498+0800","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"etcd02-bak-20230526.db"}
Snapshot saved at etcd02-bak-20230526.db
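The snapshot name above encodes the backup date by hand. A hedged sketch of generating the same naming pattern automatically (the `etcd02` prefix is an assumption carried over from the example); the command is echoed rather than run:

```shell
# Generate a date-stamped snapshot filename like etcd02-bak-20230526.db.
BACKUP_FILE="etcd02-bak-$(date +%Y%m%d).db"
# Dry run: print the backup command instead of executing it.
echo "etcdctl --endpoints=https://192.168.0.106:2379 snapshot save ${BACKUP_FILE}"
```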

 

3. Stop the etcd service on every node

systemctl stop etcd.service

 

4. Delete the data directory on all three nodes

rm -rf /opt/etcd-v3.5.4/data/*
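A more cautious variant (my suggestion, not what the author ran) is to move the old data directory aside instead of deleting it, so it can still be recovered if the restore goes wrong; shown as a dry run:

```shell
# Move the old data directory to a timestamped backup path instead of
# deleting it outright.
DATA_DIR=/opt/etcd-v3.5.4/data
STAMP=$(date +%Y%m%d%H%M%S)
# Dry run: print the command instead of executing it.
echo "mv ${DATA_DIR} ${DATA_DIR}.bak-${STAMP}"
```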

5. Copy the backup to each node

scp etcd02-bak-20230526.db root@192.168.0.105:/root
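The copy has to be repeated for every other node. A small loop sketch (node list taken from the cluster above); echoed as a dry run rather than executed:

```shell
# Copy the snapshot to the other two nodes' /root in one loop.
SNAPSHOT=etcd02-bak-20230526.db
for node in 192.168.0.105 192.168.0.189; do
  # Dry run: print each scp command instead of executing it.
  echo "scp ${SNAPSHOT} root@${node}:/root"
done
```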

 

 

6. Restore the etcd data

Restore command for node 2:

ETCDCTL_API=3 etcdctl snapshot restore /root/etcd02-bak-20230526.db \
--name etcd2 \
--initial-cluster="etcd1=https://192.168.0.105:2380,etcd2=https://192.168.0.106:2380,etcd3=https://192.168.0.189:2380" \
--initial-cluster-token=etcd-cluster \
--initial-advertise-peer-urls=https://192.168.0.106:2380 \
--data-dir=/opt/etcd-v3.5.4/data/

The output looks like this:

[root@host106 data]# ETCDCTL_API=3 etcdctl snapshot restore /root/etcd02-bak-20230526.db \
> --name etcd2 \
> --initial-cluster="etcd1=https://192.168.0.105:2380,etcd2=https://192.168.0.106:2380,etcd3=https://192.168.0.189:2380" \
> --initial-cluster-token=etcd-cluster \
> --initial-advertise-peer-urls=https://192.168.0.106:2380 \
> --data-dir=/opt/etcd-v3.5.4/data/
Deprecated: Use `etcdutl snapshot restore` instead.

2023-05-29T08:53:54+08:00    info    snapshot/v3_snapshot.go:248    restoring snapshot    {"path": "/root/etcd02-bak-20230526.db", "wal-dir": "/opt/etcd-v3.5.4/data/member/wal", "data-dir": "/opt/etcd-v3.5.4/data/", "snap-dir": "/opt/etcd-v3.5.4/data/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\t/go/src/go.etcd.io/etcd/release/etcd/etcdutl/snapshot/v3_snapshot.go:254\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\t/go/src/go.etcd.io/etcd/release/etcd/etcdutl/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdctl/v3/ctlv3/command.snapshotRestoreCommandFunc\n\t/go/src/go.etcd.io/etcd/release/etcd/etcdctl/ctlv3/command/snapshot_command.go:129\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:897\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.Start\n\t/go/src/go.etcd.io/etcd/release/etcd/etcdctl/ctlv3/ctl.go:107\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.MustStart\n\t/go/src/go.etcd.io/etcd/release/etcd/etcdctl/ctlv3/ctl.go:111\nmain.main\n\t/go/src/go.etcd.io/etcd/release/etcd/etcdctl/main.go:59\nruntime.main\n\t/go/gos/go1.16.15/src/runtime/proc.go:225"}
2023-05-29T08:53:54+08:00    info    membership/store.go:141    Trimming membership information from the backend...
2023-05-29T08:53:55+08:00    info    membership/cluster.go:421    added member    {"cluster-id": "5b962c26198c3782", "local-member-id": "0", "added-peer-id": "a1521095cffde44b", "added-peer-peer-urls": ["https://192.168.0.105:2380"]}
2023-05-29T08:53:55+08:00    info    membership/cluster.go:421    added member    {"cluster-id": "5b962c26198c3782", "local-member-id": "0", "added-peer-id": "ce9e5937db5e0599", "added-peer-peer-urls": ["https://192.168.0.189:2380"]}
2023-05-29T08:53:55+08:00    info    membership/cluster.go:421    added member    {"cluster-id": "5b962c26198c3782", "local-member-id": "0", "added-peer-id": "eb83c838a536671f", "added-peer-peer-urls": ["https://192.168.0.106:2380"]}
2023-05-29T08:53:55+08:00    info    snapshot/v3_snapshot.go:269    restored snapshot    {"path": "/root/etcd02-bak-20230526.db", "wal-dir": "/opt/etcd-v3.5.4/data/member/wal", "data-dir": "/opt/etcd-v3.5.4/data/", "snap-dir": "/opt/etcd-v3.5.4/data/member/snap"}

 

 

Restore command for node 1:

# Be sure to change --name and --initial-advertise-peer-urls for this node
ETCDCTL_API=3 etcdctl snapshot restore /root/etcd02-bak-20230526.db \
--name etcd1 \
--initial-cluster="etcd1=https://192.168.0.105:2380,etcd2=https://192.168.0.106:2380,etcd3=https://192.168.0.189:2380" \
--initial-cluster-token=etcd-cluster \
--initial-advertise-peer-urls=https://192.168.0.105:2380 \
--data-dir=/opt/etcd-v3.5.4/data/

 

Restore command for node 3:

# Be sure to change --name and --initial-advertise-peer-urls for this node
ETCDCTL_API=3 etcdctl snapshot restore /root/etcd02-bak-20230526.db \
--name etcd3 \
--initial-cluster="etcd1=https://192.168.0.105:2380,etcd2=https://192.168.0.106:2380,etcd3=https://192.168.0.189:2380" \
--initial-cluster-token=etcd-cluster \
--initial-advertise-peer-urls=https://192.168.0.189:2380 \
--data-dir=/opt/etcd-v3.5.4/data/
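The three restore commands differ only in `--name` and `--initial-advertise-peer-urls`. A dry-run bash sketch (the associative-array layout is my own) that makes the per-node differences explicit by printing each command:

```shell
# Map each member name to its peer URL; everything else is shared.
declare -A PEER_URL=(
  [etcd1]=https://192.168.0.105:2380
  [etcd2]=https://192.168.0.106:2380
  [etcd3]=https://192.168.0.189:2380
)
CLUSTER="etcd1=https://192.168.0.105:2380,etcd2=https://192.168.0.106:2380,etcd3=https://192.168.0.189:2380"
for name in etcd1 etcd2 etcd3; do
  # Dry run: print the per-node restore command instead of executing it.
  echo "ETCDCTL_API=3 etcdctl snapshot restore /root/etcd02-bak-20230526.db" \
    "--name ${name} --initial-cluster=${CLUSTER}" \
    "--initial-cluster-token=etcd-cluster" \
    "--initial-advertise-peer-urls=${PEER_URL[$name]}" \
    "--data-dir=/opt/etcd-v3.5.4/data/"
done
```

On node 1 you would run only the `etcd1` command, and so on; running the wrong member's command corrupts the cluster membership.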

 

7. Inspect the etcd data directory structure after the restore

[root@host106 data]# tree ./member/
./member/
├── snap
│   ├── 0000000000000001-0000000000000003.snap
│   └── db
└── wal
    └── 0000000000000000-0000000000000000.wal

2 directories, 3 files

 

8. Start the etcd cluster

[root@host106 data]# systemctl start etcd

[root@host105 ~]# systemctl start etcd

root@mytest:~# systemctl start etcd

 

9. Node health check

[root@host106 data]#  ETCDCTL_API=3 etcdctl --cacert=/opt/cert//etcd.pem --cert=/opt/cert//etcd.pem --key=/opt/cert//etcd-key.pem --endpoints="https://192.168.0.105:2379,https://192.168.0.106:2379,https://192.168.0.189:2379" endpoint health
https://192.168.0.106:2379 is healthy: successfully committed proposal: took = 12.54057ms
https://192.168.0.189:2379 is healthy: successfully committed proposal: took = 13.779164ms
https://192.168.0.105:2379 is healthy: successfully committed proposal: took = 52.266177ms
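Beyond `endpoint health`, a quick functional smoke test is to write a key and read it back. A hedged sketch (the key name `/restore-check` is my own); shown as a dry run, so drop the `echo`s to run it against the live cluster:

```shell
# Write/read round-trip against one member to confirm the restore worked.
FLAGS="--cacert=/opt/cert/etcd.pem --cert=/opt/cert/etcd.pem --key=/opt/cert/etcd-key.pem --endpoints=https://192.168.0.106:2379"
# Dry run: print the put/get commands instead of executing them.
echo "etcdctl ${FLAGS} put /restore-check ok"
echo "etcdctl ${FLAGS} get /restore-check"
```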

 

 

 



 

From: https://www.cnblogs.com/mjxi/p/17439452.html
