
Kubernetes etcd Backup and Restore

Published: 2024-09-06 11:52:03

Contents

Backup and Restore with etcdctl

Introduction

**etcdctl** is the command-line tool for managing and operating etcd. etcd is a highly available, distributed key-value store, widely used for storing and managing configuration data. etcdctl provides backup and restore functionality, which is essential for data protection and system migration.

The etcd cluster here consists of three nodes in a high-availability configuration. A backup only needs to be taken on any one node, since the data is replicated across all members. Restoring, however, must be performed on every node.

Cluster Information

Installation method  Version
kubeadm              1.23.17

Installing etcdctl

Download

wget https://gh.monlor.com/https://github.com/etcd-io/etcd/releases/download/v3.4.30/etcd-v3.4.30-linux-amd64.tar.gz

Install

tar -zxf etcd-v3.4.30-linux-amd64.tar.gz
mv etcd-v3.4.30-linux-amd64/etcdctl /usr/local/bin
chmod +x /usr/local/bin/etcdctl

Copy to the other nodes

scp -r  /usr/local/bin/etcdctl  master02:/usr/local/bin/
scp -r  /usr/local/bin/etcdctl  master03:/usr/local/bin/

Configure environment variables

vi ~/.bashrc
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key
export ETCDCTL_ENDPOINTS=192.168.1.161:2379,192.168.1.162:2379,192.168.1.163:2379

source ~/.bashrc

Check cluster health

ETCDCTL_API=3 \
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt \
ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt \
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key \
ETCDCTL_ENDPOINTS=192.168.1.161:2379,192.168.1.162:2379,192.168.1.163:2379 \
etcdctl --write-out=table endpoint health

# Output
+--------------------+--------+-------------+-------+
|      ENDPOINT      | HEALTH |    TOOK     | ERROR |
+--------------------+--------+-------------+-------+
| 192.168.1.162:2379 |   true | 14.612588ms |       |
| 192.168.1.161:2379 |   true | 21.240783ms |       |
| 192.168.1.163:2379 |   true | 20.533771ms |       |
+--------------------+--------+-------------+-------+

List all keys

ETCDCTL_API=3 \
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt \
ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt \
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key \
ETCDCTL_ENDPOINTS=192.168.1.161:2379,192.168.1.162:2379,192.168.1.163:2379 \
etcdctl get /  --prefix --keys-only

Read a specific key

ETCDCTL_API=3 \
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt \
ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt \
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key \
ETCDCTL_ENDPOINTS=192.168.1.161:2379,192.168.1.162:2379,192.168.1.163:2379 \
etcdctl get /registry/namespaces/default

Backup

Create the backup directory on all nodes


mkdir -p /opt/etcd_backup/

Back up the etcd data

Note: when taking a snapshot, etcdctl must connect to a single etcd node, not multiple endpoints; otherwise it fails with the error "snapshot must be requested to one selected node".

Back up against a single node

ETCDCTL_API=3 \
ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt \
ETCDCTL_CERT=/etc/kubernetes/pki/etcd/peer.crt \
ETCDCTL_KEY=/etc/kubernetes/pki/etcd/peer.key \
ETCDCTL_ENDPOINTS=192.168.1.161:2379 \
etcdctl snapshot save /opt/etcd_backup/snap-etcd-$(date +%F-%H-%M-%S).db

The snapshot is saved to /opt/etcd_backup/snap-etcd-2024-09-06-10-21-42.db
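Before trusting the snapshot, its integrity can be checked offline with `etcdctl snapshot status`, which reads the .db file directly and needs no running cluster. A minimal guarded sketch, using the file saved above:

```shell
#!/bin/sh
# Path of the snapshot taken in the previous step.
SNAP="/opt/etcd_backup/snap-etcd-2024-09-06-10-21-42.db"

# snapshot status prints the snapshot's hash, revision, total key count
# and size, so a truncated or corrupt file is caught before a restore.
if command -v etcdctl >/dev/null 2>&1 && [ -f "$SNAP" ]; then
    ETCDCTL_API=3 etcdctl snapshot status "$SNAP" --write-out=table
else
    echo "etcdctl or snapshot file not available; skipping check"
fi
```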

Restore

Delete some resources

To simulate data loss, delete the resources in the default and ops namespaces:

[root@master01 ~]# kubectl delete deployments.apps  nginx
deployment.apps "nginx" deleted
[root@master01 ~]# kubectl -n ops delete deployments.apps redis-single 
deployment.apps "redis-single" deleted

Stop etcd on all master nodes by moving the static pod manifest out of /etc/kubernetes/manifests; kubelet then stops the etcd pod:

mv /etc/kubernetes/manifests/etcd.yaml /home/

Back up the existing data directory on all master nodes

[root@master01 home]# mv /var/lib/etcd/ /var/lib/etcd-$(date +%F-%H-%M-%S)-backup/
[root@master01 home]# ls /var/lib/etcd-2024-09-06-10-45-19-backup
member

Restore on master01

ETCDCTL_API=3 etcdctl snapshot restore  /opt/etcd_backup/snap-etcd-2024-09-06-11-12-02.db \
--cacert=/etc/kubernetes/pki/etcd/ca.crt  \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key  \
--data-dir=/var/lib/etcd/   \
--endpoints=https://127.0.0.1:2379 \
--initial-cluster=master01=https://192.168.1.161:2380,master02=https://192.168.1.162:2380,master03=https://192.168.1.163:2380 \
--name=master01 \
--initial-advertise-peer-urls=https://192.168.1.161:2380

After a successful restore the data directory is recreated:

[root@master01 ~]# ls /var/lib/etcd
member

Copy the snapshot file to the other master nodes

scp  /opt/etcd_backup/snap-etcd-2024-09-06-11-12-02.db   master02:/opt/etcd_backup/
scp  /opt/etcd_backup/snap-etcd-2024-09-06-11-12-02.db   master03:/opt/etcd_backup/

Restore on master02

Note: change the member name and the peer URLs for each node.

ETCDCTL_API=3 etcdctl snapshot restore  /opt/etcd_backup/snap-etcd-2024-09-06-11-12-02.db \
--cacert=/etc/kubernetes/pki/etcd/ca.crt  \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key  \
--data-dir=/var/lib/etcd/   \
--endpoints=https://127.0.0.1:2379 \
--initial-cluster=master01=https://192.168.1.161:2380,master02=https://192.168.1.162:2380,master03=https://192.168.1.163:2380 \
--name=master02 \
--initial-advertise-peer-urls=https://192.168.1.162:2380

Restore on master03

ETCDCTL_API=3 etcdctl snapshot restore  /opt/etcd_backup/snap-etcd-2024-09-06-11-12-02.db \
--cacert=/etc/kubernetes/pki/etcd/ca.crt  \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key  \
--data-dir=/var/lib/etcd/   \
--endpoints=https://127.0.0.1:2379 \
--initial-cluster=master01=https://192.168.1.161:2380,master02=https://192.168.1.162:2380,master03=https://192.168.1.163:2380 \
--name=master03 \
--initial-advertise-peer-urls=https://192.168.1.163:2380

Start etcd on all nodes by moving the manifest back:

mv /home/etcd.yaml  /etc/kubernetes/manifests/
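After the manifests are moved back, kubelet restarts the etcd static pods; give them a moment to re-form a quorum. A quick membership check, assuming the `ETCDCTL_*` variables from `~/.bashrc` above are set:

```shell
#!/bin/sh
# endpoint status shows DB size, raft term/index and the current leader.
# All three members should converge to the same raft index, and exactly
# one of them should report IS LEADER = true.
if command -v etcdctl >/dev/null 2>&1; then
    etcdctl --write-out=table endpoint status
else
    echo "etcdctl not found in PATH; skipping check"
fi
```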

Verify

The resources we deleted earlier are back.
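Concretely, the two deployments deleted before the restore should be listed again; a quick check, assuming kubectl access on master01:

```shell
#!/bin/sh
# Both deployments were deleted before the restore; after etcd comes
# back up they should be present again.
if command -v kubectl >/dev/null 2>&1; then
    kubectl get deployments.apps nginx
    kubectl -n ops get deployments.apps redis-single
else
    echo "kubectl not found in PATH; skipping check"
fi
```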

Add a backup script

[root@master01 scpipt]# cat etcd_backup.sh 
#!/bin/bash

# Environment variables
ETCDCTL_API=3
ETCD_CACERT="/etc/kubernetes/pki/etcd/ca.crt"
ETCD_CERT="/etc/kubernetes/pki/etcd/peer.crt"
ETCD_KEY="/etc/kubernetes/pki/etcd/peer.key"
ETCD_ENDPOINTS="192.168.1.161:2379"  # snapshot save must target a single endpoint (see the note above)

# Backup directory and file name
BACKUP_DIR="/opt/etcd_backup"
BACKUP_FILE="snap-etcd-$(date +%F-%H-%M-%S).db"
BACKUP_PATH="${BACKUP_DIR}/${BACKUP_FILE}"

# Create the backup directory if it does not exist
mkdir -p "${BACKUP_DIR}"

# Take the snapshot
ETCDCTL_API="${ETCDCTL_API}" \
ETCDCTL_CACERT="${ETCD_CACERT}" \
ETCDCTL_CERT="${ETCD_CERT}" \
ETCDCTL_KEY="${ETCD_KEY}" \
ETCDCTL_ENDPOINTS="${ETCD_ENDPOINTS}" \
etcdctl snapshot save "${BACKUP_PATH}"

# Check whether the backup succeeded
if [ $? -eq 0 ]; then
    echo "Backup successfully created at ${BACKUP_PATH}"
else
    echo "Backup failed"
    exit 1
fi
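Snapshots taken on a schedule accumulate quickly. A hypothetical companion pruning step (the 7-day window and the reuse of /opt/etcd_backup are assumptions; adjust to your retention policy) could run after each backup or from its own cron entry:

```shell
#!/bin/sh
# Prune old etcd snapshots so the backup directory does not grow unbounded.
# BACKUP_DIR and RETENTION_DAYS are example values; override as needed.
BACKUP_DIR="${BACKUP_DIR:-/opt/etcd_backup}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"

# Delete only files matching the naming scheme of the backup script above,
# and only if the directory actually exists.
if [ -d "${BACKUP_DIR}" ]; then
    find "${BACKUP_DIR}" -type f -name 'snap-etcd-*.db' \
        -mtime "+${RETENTION_DAYS}" -delete
fi
```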

Run the backup script every Wednesday at midnight (i.e. 00:00 Thursday):

[root@master01 scpipt]# crontab -e
0 0 * * 4 /scpipt/etcd_backup.sh

From: https://www.cnblogs.com/Unstoppable9527/p/18399967
