1. Introduction
Velero is a disaster recovery and migration tool for the cloud-native era. It is written in Go and open-sourced on GitHub. With Velero, users can safely back up, restore, and migrate Kubernetes cluster resources and persistent volumes.
1.1 Supported versions
1.2 Velero components
Velero consists of two parts: a server and a client.
- Server: runs inside your Kubernetes cluster
- Client: a command-line tool that runs locally; it requires a machine with kubectl configured and the cluster kubeconfig in place
1.3 Velero backup flow
- The velero client calls the Kubernetes API server to create a backup task
- The Backup controller, watching via the API server, picks up the new backup task
- The Backup controller performs the backup, requesting the data to back up from the API server
- The Backup controller uploads the collected data to the configured object storage server
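The backup task created in the first step is an ordinary Backup custom resource submitted through the API server; a minimal sketch (the name and namespace here are illustrative, field names follow Velero's velero.io/v1 API):

```yaml
# Roughly what `velero backup create my-backup --include-namespaces demo` submits.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-backup
  namespace: velero        # Backup objects live in Velero's own namespace
spec:
  includedNamespaces:      # which namespaces the controller should collect
  - demo
```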
1.4 Velero backend storage
Velero supports two CRDs for backend storage: BackupStorageLocation and VolumeSnapshotLocation.
1.4.1 BackupStorageLocation
Defines where the Kubernetes cluster resource data is stored, i.e. the cluster object data, not the PVC data. The primary backends are S3-compatible stores such as Minio and Alibaba Cloud OSS.
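A BackupStorageLocation is itself declared as a custom resource; a sketch pointing at an S3-compatible minio endpoint (the bucket name and URL are placeholders, mirroring the install flags used later in this article):

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws              # the aws plugin speaks the S3 protocol
  objectStorage:
    bucket: velero           # placeholder bucket name
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio:9000 # placeholder endpoint
```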
1.4.2 VolumeSnapshotLocation
Used to take snapshots of PVs; this requires a plugin from the cloud provider. Alibaba Cloud already provides one, and this approach relies on a storage mechanism such as CSI. Alternatively, you can use the dedicated backup tool Restic to back up PV data to Alibaba Cloud OSS (this requires custom options at install time).
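A VolumeSnapshotLocation is declared the same way; a sketch for a cloud provider (the provider name and config keys vary per snapshot plugin and are placeholders here):

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws       # placeholder; use your cloud's snapshot plugin name
  config:
    region: us-east-1 # placeholder; keys depend on the chosen plugin
```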
Restic is a data encryption backup tool written in Go. As the name suggests, it encrypts local data and transfers it to a designated repository. Supported repositories include Local, SFTP, AWS S3, Minio, OpenStack Swift, Backblaze B2, Azure Blob Storage, Google Cloud Storage, and REST Server.
2. Installing the velero client
Download the velero binary client package from the GitHub Releases page; here we download v1.11.1, the version matching our Kubernetes cluster.
Release list: https://github.com/vmware-tanzu/velero/releases
2.1 Installing the velero binary
$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.11.1/velero-v1.11.1-linux-amd64.tar.gz
$ tar zxf velero-v1.11.1-linux-amd64.tar.gz
$ mv velero-v1.11.1-linux-amd64/velero /usr/bin/
$ velero -h
# Enable command completion
$ source <(velero completion bash)
$ velero completion bash > /etc/bash_completion.d/velero
2.2 Installing minio
Velero supports many storage plugins (see Velero Docs - Providers for the full list); here we use minio as the S3-compatible object storage provider. You can deploy Minio anywhere, as long as the Kubernetes cluster can reach it.
Choose the storage backend that fits your business scenario.
# cat velero-v1.11.1-linux-amd64/examples/minio/00-minio-deployment.yaml
# Copyright 2017 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: minio
  template:
    metadata:
      labels:
        component: minio
    spec:
      volumes:
      - name: storage
        emptyDir: {}
      - name: config
        emptyDir: {}
      containers:
      - name: minio
        image: minio/minio:latest
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /storage
        - --config-dir=/config
        - --console-address=:9001
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: "/storage"
        - name: config
          mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: NodePort
  ports:
  - name: api
    port: 9000
    targetPort: 9000
    nodePort: 32000
  - name: console
    port: 9001
    targetPort: 9001
    nodePort: 32001
  selector:
    component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: velero
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: config
        emptyDir: {}
      containers:
      - name: mc
        image: minio/mc:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
        volumeMounts:
        - name: config
          mountPath: "/config"
2.3 Deploying the minio application
# Create the velero namespace
$ kubectl create namespace velero
# Create the minio resources
$ kubectl apply -f velero-v1.11.1-linux-amd64/examples/minio/00-minio-deployment.yaml
# Check the deployment status
$ kubectl get pod,svc -n velero
NAME                         READY   STATUS      RESTARTS   AGE
pod/minio-78f994f86c-g86bp   1/1     Running     0          9m34s
pod/minio-setup-27jds        0/1     Completed   3          9m34s

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/minio   NodePort   10.68.151.238   <none>        9000:31657/TCP,9001:30282/TCP   9m34s
# (Optional) Pin the NodePort ports. Note: nodePort values must fall within the cluster's
# service-node-port-range (default 30000-32767), so values such as 9000/9001 would be rejected.
$ kubectl patch svc minio -n velero -p '{"spec": {"type": "NodePort"}}'
$ kubectl patch svc minio -n velero --type='json' -p='[{"op": "replace", "path": "/spec/ports/0/nodePort", "value":32000},{"op": "replace", "path": "/spec/ports/1/nodePort", "value":32001}]'
$ kubectl get svc -n velero
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
minio   NodePort   10.68.151.238   <none>        9000:31657/TCP,9001:30282/TCP   3m12s
Open a browser at <server IP>:<console NodePort> (30282 in the output above) and log in with the username minio and password minio123 to verify.
2.4 Installing minio on a physical machine
If you need to back up and restore data across different Kubernetes clusters and storage pools, install the minio server outside the Kubernetes cluster so that a catastrophic cluster failure cannot affect the backup data. It can be installed from the binary release.
Download the binary on the server where minio will be installed:
➜ ~ wget https://dl.minio.io/server/minio/release/linux-amd64/minio
➜ ~ chmod +x minio
➜ ~ sudo mv minio /usr/local/bin/
➜ ~ minio --version
Prepare the disk for object storage (we skip that step here). minio is easiest to manage as a systemd service. On systems using systemd as the init system, create the user and group that will run the minio service:
➜ ~ sudo groupadd --system minio
➜ ~ sudo useradd -s /sbin/nologin --system -g minio minio
Give the minio user ownership of the /data directory (the disk mount point prepared above):
➜ ~ sudo chown -R minio:minio /data/
Create a systemd service unit file for minio:
➜ ~ vi /etc/systemd/system/minio.service
[Unit]
Description=Minio
Documentation=https://docs.minio.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio
[Service]
WorkingDirectory=/data
User=minio
Group=minio
EnvironmentFile=-/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
# Let systemd restart this service always
Restart=always
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
Create the minio environment file /etc/default/minio:
# Volume to be used for Minio server.
MINIO_VOLUMES="/data"
# Use if you want to run Minio on a custom port.
MINIO_OPTS="--address :9000"
# Access Key of the server.
MINIO_ACCESS_KEY=minio
# Secret key of the server.
MINIO_SECRET_KEY=minio123
MINIO_ACCESS_KEY is the access key (at least 3 characters) and MINIO_SECRET_KEY is the secret key (at least 8 characters). Reload systemd and start the minio service:
➜ ~ sudo systemctl daemon-reload
➜ ~ sudo systemctl start minio
See the official documentation at https://docs.min.io/ for more on using minio.
3. Installing the velero server
The server can be installed with the velero client or via the Helm chart; here we use the client. The velero command reads the cluster context from the kubectl configuration by default, so the node running the velero client must have a kubeconfig with access to the cluster.
3.1 Creating the credentials
First prepare the credentials file. Create a new text file in the current directory with the following content:
$ cat > credentials-velero <<EOF
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
EOF
3.2 Installing velero into the Kubernetes cluster
Substitute the minio access key id and secret access key from the earlier step. If minio is installed inside the Kubernetes cluster, install the velero server as follows:
$ velero install \
--provider aws \
--image velero/velero:v1.11.1 \
--plugins velero/velero-plugin-for-aws:v1.7.1 \
--bucket velero \
--secret-file ./credentials-velero \
--use-node-agent \
--use-volume-snapshots=false \
--namespace velero \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio:9000 \
--wait
# Running the install command creates a series of manifests: CustomResourceDefinitions, a Namespace, a Deployment, and more.
# For a private registry, export the YAML with --dry-run, adjust it, then apply:
#   velero install .... --dry-run -o yaml > velero_deploy.yaml
# View the server logs with:
$ kubectl logs deployment/velero -n velero
# List the API groups velero registered
$ kubectl api-versions | grep velero
velero.io/v1
# Check the backup storage location
$ velero backup-location get
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        velero          Available   2023-03-28 15:45:30 +0800 CST   ReadWrite     true
# Uninstall command
# velero uninstall --namespace velero
You are about to uninstall Velero.
Are you sure you want to continue (Y/N)? y
Waiting for velero namespace "velero" to be deleted
............................................................................................................................................................................................
Velero namespace "velero" deleted
Velero uninstalled ⛵
Option reference:
- --kubeconfig (optional): kubeconfig file for authentication; defaults to ~/.kube/config
- --provider: the plugin provider to use
- --image: the image to run velero with; defaults to the same version as the client
- --plugins: the AWS S3-compatible plugin image to use
- --bucket: the object storage bucket name
- --secret-file: the object storage credentials file
- --use-node-agent: create the Velero node-agent DaemonSet, which hosts the file-system backup (FSB) modules
- --use-volume-snapshots: whether to enable volume snapshots
- --namespace: the namespace to deploy into; defaults to velero
- --backup-location-config: the object storage endpoint settings
3.3 Uninstalling velero
To remove Velero from the cluster completely, the following commands delete all resources created by velero install:
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
4. Backup and restore
Backup command: velero create backup NAME [flags]
backup flags:
- --exclude-namespaces stringArray: namespaces to exclude from the backup
- --exclude-resources stringArray: resources to exclude from the backup, e.g. storageclasses.storage.k8s.io
- --include-cluster-resources optionalBool[=true]: include cluster-scoped resource types
- --include-namespaces stringArray: namespaces to include in the backup (default '*')
- --include-resources stringArray: resources to include in the backup
- --labels mapStringString: labels to apply to this backup
- -o, --output string: output format; supports 'table', 'json', and 'yaml'
- -l, --selector labelSelector: only back up resources matching the label selector
- --snapshot-volumes optionalBool[=true]: take snapshots of PVs
- --storage-location string: the backup storage location to use
- --ttl duration: how long to keep the backup before it is deleted
- --volume-snapshot-locations strings: the snapshot locations to use, i.e. which cloud provider driver
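Most of these flags map onto fields of the Backup custom resource; a rough sketch of the equivalent spec (the values here are illustrative, field names follow Velero's velero.io/v1 API):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example
  namespace: velero
spec:
  includedNamespaces: ["nginx-example"]  # --include-namespaces
  excludedResources:                     # --exclude-resources
  - storageclasses.storage.k8s.io
  labelSelector:                         # -l / --selector
    matchLabels:
      app: nginx
  snapshotVolumes: true                  # --snapshot-volumes
  storageLocation: default               # --storage-location
  ttl: 720h0m0s                          # --ttl
```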
4.1 Backup
4.1.1 Creating a test application from the official example
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
# List the created resources
$ kubectl get all -n nginx-example
NAME                                    READY   STATUS              RESTARTS   AGE
pod/nginx-deployment-5c844b66c8-7fsb9   0/1     ContainerCreating   0          32s
pod/nginx-deployment-5c844b66c8-dh4bn   1/1     Running             0          4m53s
pod/nginx-deployment-5c844b66c8-q2dc5   0/1     Terminating         0          4m53s

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/my-nginx   LoadBalancer   10.68.79.207   <pending>     80:32435/TCP   4m53s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   1/2     2            1           4m53s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-5c844b66c8   2         2         1       4m53s
4.1.2 Backing up the test application
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
Options:
- --include-namespaces: the namespaces to include
- --selector: a label selector, e.g. app=nginx
4.1.3 Listing backups
$ velero backup get
NAME           STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
nginx-backup   Completed   0        0          2024-04-06 21:47:16 +0800 CST   29d       default            <none>
# Show the backup details
$ velero backup describe nginx-backup
# Show the backup logs
$ velero backup logs nginx-backup
Log in to the minio console to inspect the backup contents.
4.1.4 Scheduled backups
# Schedule a backup with a cron expression
$ velero schedule create nginx-daily --schedule="0 1 * * *" --include-namespaces nginx-example
# Some non-standard shorthand cron expressions also work
$ velero schedule create nginx-daily --schedule="@daily" --include-namespaces nginx-example
# Trigger the scheduled backup manually
$ velero backup create --from-schedule nginx-daily
For more cron examples see the cron package's documentation.
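A schedule can also be declared as a Schedule custom resource, whose template embeds an ordinary Backup spec; an approximate sketch of what the `velero schedule create nginx-daily` command above produces:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nginx-daily
  namespace: velero
spec:
  schedule: "0 1 * * *"   # standard cron syntax
  template:               # an ordinary Backup spec
    includedNamespaces:
    - nginx-example
    ttl: 720h0m0s         # default retention is 30 days
```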
4.2 Restore
4.2.1 Simulating a disaster
# Delete the nginx-example namespace and its resources
$ kubectl delete namespace nginx-example
# Verify the deletion
$ kubectl get all -n nginx-example
No resources found in nginx-example namespace.
4.2.2 Restoring the resources
$ velero backup get
NAME           STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
nginx-backup   Completed   0        0          2024-04-06 21:47:16 +0800 CST   29d       default            <none>
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20240406215611" submitted successfully.
Run `velero restore describe nginx-backup-20240406215611` or `velero restore logs nginx-backup-20240406215611` for more details.
4.2.3 Checking the restored resources
$ velero restore get
NAME                          BACKUP         STATUS      STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
nginx-backup-20240406215611   nginx-backup   Completed   2024-04-06 21:56:11 +0800 CST   2024-04-06 21:56:12 +0800 CST   0        2          2024-04-06 21:56:11 +0800 CST   <none>
# Show the restore details
$ velero restore describe nginx-backup-20240406215611
# Check the resource status
$ kubectl get all -n nginx-example
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-5c844b66c8-7fsb9   1/1     Running   0          67s
pod/nginx-deployment-5c844b66c8-dh4bn   1/1     Running   0          67s

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/my-nginx   LoadBalancer   10.68.171.131   <pending>     80:30635/TCP   67s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   2/2     2            2           67s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-5c844b66c8   2         2         2       67s
5. Migration walkthrough
5.1 Project overview
We will use Velero to migrate a cloud-native application together with its PV data. In this example we move an application from cluster A to cluster B, using a self-hosted Minio object storage service to hold the backup data.
5.1.1 Requirements
- For a migration, the two Kubernetes clusters should ideally run the same version.
- To migrate PV data successfully, a StorageClass with the same name must exist on both sides.
- You can deploy Minio yourself or use a public cloud object storage service such as Huawei OBS or Alibaba OSS.
- This example migrates the services and PV data in the istio-system namespace from cluster A to cluster B.
5.1.2 Environment

Role | Cluster IPs | Cluster version | Deployed software
---|---|---|---
K8S cluster A | 192.168.19.128, 192.168.19.129, 192.168.19.130, 192.168.19.131 | v1.23.1 | velero, istio-system
K8S cluster B | 192.168.19.140, 192.168.19.141, 192.168.19.142 | v1.23.17 | velero, minio
5.1.3 Project description
We need to migrate all resources and data in the istio-system namespace of cluster A to cluster B. The project includes deployment, statefulset, service, ingress, job, cronjob, secret, configmap, pv, and pvc resources.
# Project inventory
$ kubectl get deployment,sts,pvc -n istio-system
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana                1/1     1            1           62d
deployment.apps/istio-egressgateway    1/1     1            1           62d
deployment.apps/istio-ingressgateway   1/1     1            1           62d
deployment.apps/istiod                 1/1     1            1           62d
deployment.apps/jaeger                 1/1     1            1           62d
deployment.apps/kiali                  1/1     1            1           62d
deployment.apps/prometheus             1/1     1            1           62d
5.2 Preparing object storage
Create the minio application in cluster B (192.168.19.140) following sections 2.2 and 2.3; it will store the backup data.
$ kubectl get deployment,svc -n velero
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio   1/1     1            1           5m27s

NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                         AGE
service/minio   NodePort   10.96.3.195   <none>        9000:32000/TCP,9001:32001/TCP   4m44s
5.3 Installing velero
Make sure the velero client is installed in both cluster A and cluster B; see section 2.1.
5.3.1 Installing the velero server in cluster A
$ cat > credentials-velero <<EOF
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
EOF
$ velero install \
--provider aws \
--image velero/velero:v1.11.1 \
--plugins velero/velero-plugin-for-aws:v1.7.1 \
--bucket velero \
--secret-file ./credentials-velero \
--use-node-agent \
--use-volume-snapshots=false \
--namespace velero \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://192.168.19.140:32000 \
--wait
Note: here the S3 URL points at the minio object storage in cluster B (192.168.19.140).
5.3.2 Installing the velero server in cluster B
$ cat > credentials-velero <<EOF
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
EOF
$ velero install \
--provider aws \
--image velero/velero:v1.11.1 \
--plugins velero/velero-plugin-for-aws:v1.7.1 \
--bucket velero \
--secret-file ./credentials-velero \
--use-node-agent \
--use-volume-snapshots=false \
--namespace velero \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio:9000 \
--wait
Note: here the S3 URL points at the minio service address inside the local cluster.
5.4 Backing up the project
$ velero backup create istio-backup \
--default-volumes-to-fs-backup \
--include-namespaces istio-system
Backup request "istio-backup" submitted successfully.
Run `velero backup describe istio-backup` or `velero backup logs istio-backup` for more details.
# Check the backup status; it is still in progress
$ velero backup get
NAME           STATUS       ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
istio-backup   InProgress   0        0          2024-04-06 22:33:36 +0800 CST   29d       default            <none>
# Wait until the status is Completed
$ velero backup get
NAME           STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
istio-backup   Completed   0        0          2024-04-06 22:33:36 +0800 CST   29d       default            <none>
- --default-volumes-to-fs-backup: back up all PV volumes with file-system backup by default; see the official documentation for details.
- --include-namespaces: the namespaces to back up

The backed-up files are visible in the minio console:
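When --default-volumes-to-fs-backup is set, individual volumes can still be opted out of file-system backup through a pod annotation; a sketch (the pod and volume names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  annotations:
    # comma-separated list of volume names to exclude from file-system backup
    backup.velero.io/backup-volumes-excludes: cache-volume
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}
```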
5.5 Restoring into cluster B
# In cluster B, list the available backups
$ velero backup get
NAME           STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
istio-backup   Completed   0        0          2024-04-06 22:33:36 +0800 CST   29d       default            <none>
# Run the restore command
$ velero restore create --from-backup istio-backup
Restore request "istio-backup-20240406224033" submitted successfully.
Run `velero restore describe istio-backup-20240406224033` or `velero restore logs istio-backup-20240406224033` for more details.
# Check the restore task; it is in progress, and a status of Completed means the restore is done
$ velero restore get
NAME                          BACKUP         STATUS       STARTED                         COMPLETED   ERRORS   WARNINGS   CREATED                         SELECTOR
istio-backup-20240406224033   istio-backup   InProgress   2024-04-06 22:40:33 +0800 CST   <nil>       0        0          2024-04-06 22:40:33 +0800 CST   <none>
5.6 Verifying services and data
$ kubectl get all -n istio-system
NAME                                       READY   STATUS    RESTARTS   AGE
pod/grafana-b854c6c8-cn244                 1/1     Running   0          7m50s
pod/istio-egressgateway-85df6b84b7-ccqmg   1/1     Running   0          7m50s
pod/istio-ingressgateway-58ffd5967-9kqf7   1/1     Running   0          7m49s
pod/istiod-8d74787f-7skg6                  1/1     Running   0          7m49s
pod/jaeger-5556cd8fcf-5f75s                1/1     Running   0          7m49s
pod/kiali-648847c8c4-rztws                 1/1     Running   0          7m49s
pod/prometheus-7b8b9dd44c-nzff4            2/2     Running   0          7m49s

NAME                           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                                                                                      AGE
service/grafana                ClusterIP      10.96.3.122   <none>        3000/TCP                                                                                     7m47s
service/istio-egressgateway    ClusterIP      10.96.0.74    <none>        80/TCP,443/TCP                                                                               7m47s
service/istio-ingressgateway   LoadBalancer   10.96.2.36    <pending>     15021:30083/TCP,80:32311/TCP,443:32260/TCP,31400:30872/TCP,15443:32641/TCP,31401:31020/TCP   7m47s
service/istiod                 ClusterIP      10.96.2.246   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                                        7m47s
service/jaeger-collector       ClusterIP      10.96.1.101   <none>        14268/TCP,14250/TCP,9411/TCP                                                                 7m47s
service/kiali                  ClusterIP      10.96.3.3     <none>        20001/TCP,9090/TCP                                                                           7m47s
service/prometheus             ClusterIP      10.96.1.113   <none>        9090/TCP                                                                                     7m47s
service/tracing                NodePort       10.96.1.187   <none>        80:31721/TCP,16685:30582/TCP                                                                 7m47s
service/zipkin                 ClusterIP      10.96.1.44    <none>        9411/TCP                                                                                     7m46s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana                1/1     1            1           7m45s
deployment.apps/istio-egressgateway    1/1     1            1           7m45s
deployment.apps/istio-ingressgateway   1/1     1            1           7m45s
deployment.apps/istiod                 1/1     1            1           7m45s
deployment.apps/jaeger                 1/1     1            1           7m45s
deployment.apps/kiali                  0/1     1            0           7m45s
deployment.apps/prometheus             1/1     1            1           7m44s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-b854c6c8                  1         1         1       7m49s
replicaset.apps/istio-egressgateway-85df6b84b7    1         1         1       7m49s
replicaset.apps/istio-ingressgateway-58ffd5967    1         1         1       7m49s
replicaset.apps/istio-ingressgateway-6bb8fb6549   0         0         0       7m49s
replicaset.apps/istiod-8d74787f                   1         1         1       7m49s
replicaset.apps/jaeger-5556cd8fcf                 1         1         1       7m48s
replicaset.apps/kiali-648847c8c4                  1         1         0       7m48s
replicaset.apps/prometheus-7b8b9dd44c             1         1         1       7m48s
From: https://www.cnblogs.com/nf01/p/18118159