Nowadays much middleware, such as Elasticsearch and MySQL, is deployed as containers on Kubernetes. Commercial Kubernetes products usually ship built-in backup features for this middleware, but if we want to export and back up the data of these containerized services ourselves, we can run the export with a Kubernetes CronJob.
elasticdump CronJob
elasticdump is an open-source Elasticsearch data migration tool; a mirror is hosted on Gitee at
https://gitee.com/AshitaKaze/elasticsearch-dump. With it we can export and migrate index data.
The following Kubernetes CronJob:
1. uses the elasticdump image to export indices from the containerized Elasticsearch and saves them to a specified directory on the host;
2. runs every day at 00:30;
3. exports only the specified indices.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cpaas-elasticsearch-dump
  namespace: cpaas-system
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 20
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      activeDeadlineSeconds: 7200
      backoffLimit: 6
      template:
        metadata:
          creationTimestamp: null
        spec:
          affinity: {}
          containers:
          - args:
            - -ecx
            # loop over the target indices and export each one with elasticdump
            - |
              suffix=`date +%Y%m%d`;
              indices=( log-workload-$suffix log-system-$suffix log-kubernetes-$suffix log-platform-$suffix );
              for index in ${indices[@]}; do elasticdump --limit=1000 --input $ELASTIC_PROTOCOL://$ELASTIC_USERNAME:$ELASTIC_PASSWORD@$ELASTIC_HOST/$index --direction=dump --output=/opt/$index.json; done
            command:
            - /bin/bash
            # Elasticsearch connection settings; the credentials are injected
            # from a Secret as environment variables
            env:
            - name: ELASTIC_USERNAME
              valueFrom:
                secretKeyRef:
                  key: username
                  name: elasticsearch-basic-auth
            - name: ELASTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: elasticsearch-basic-auth
            - name: ELASTIC_PROTOCOL
              value: http
            - name: ELASTIC_HOST
              value: cpaas-elasticsearch:9200
            - name: INDEX_PREFIX
              value: log-workload
            envFrom:
            - secretRef:
                name: elasticsearch-basic-auth
            # elasticdump image
            image: elasticdump/elasticsearch-dump:latest
            imagePullPolicy: IfNotPresent
            name: elasticsearch-dump
            resources:
              limits:
                cpu: 800m
                memory: 800Mi
              requests:
                cpu: 800m
                memory: 800Mi
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            # mount point the dump files are written to
            volumeMounts:
            - mountPath: /opt
              name: backup
          dnsPolicy: ClusterFirst
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          # hostPath on the node where the dumps are kept
          volumes:
          - hostPath:
              path: /cpaas/esbackup
              type: ""
            name: backup
  schedule: 30 00 * * *
  successfulJobsHistoryLimit: 20
  suspend: false
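To check that a run actually produced its output, the filenames the loop writes can be reproduced locally. A minimal sketch of the naming scheme used by the job above (`/opt` is the in-container mount, which maps to `/cpaas/esbackup` on the node):

```shell
#!/bin/bash
# Print the dump filenames the CronJob writes for today's date,
# following the log-*-YYYYMMDD naming scheme from the job args.
suffix=$(date +%Y%m%d)
for index in log-workload log-system log-kubernetes log-platform; do
  echo "/opt/${index}-${suffix}.json"
done
```

A one-off run outside the schedule can be triggered with `kubectl create job --from=cronjob/cpaas-elasticsearch-dump test-dump -n cpaas-system`, after which these files should appear under /cpaas/esbackup on the node.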
mysqldump CronJob
The following is the mysqldump CronJob:
apiVersion: batch/v1  # batch/v1beta1 was removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: mysqldump
spec:
  jobTemplate:
    spec:
      completions: 1
      template:
        spec:
          restartPolicy: Never
          volumes:
          - name: mysql-master-script
            hostPath:
              path: /root/app/mysql/shell  # directory holding the backup script
          - name: mysql-master-backup
            hostPath:
              path: /root/app/db/backup  # host path the dumps are saved to
          - name: local-time
            hostPath:
              path: /etc/localtime
          containers:
          - name: mysqldump-container
            image: nacos/nacos-mysql-master:latest
            volumeMounts:
            - name: mysql-master-script
              mountPath: /var/db/script
            - name: local-time
              mountPath: /etc/localtime
            - name: mysql-master-backup
              mountPath: /var/db/backup  # mount point for the backup directory
            command:
            - "sh"
            - "/var/db/script/mysqldump.sh"
  schedule: "30 00 * * *"
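Once the job has fired, the simplest health check is whether a dump stamped with today's date exists in the backup directory. A minimal sketch, using a temporary directory and a simulated dump in place of the real /var/db/backup written by the script below:

```shell
#!/bin/bash
# Check that today's dump exists (illustrative paths; the real job
# writes to /var/db/backup via the hostPath volume above).
backup_dir=$(mktemp -d)
database_name=test
dd=$(date +%Y%m%d)
touch "$backup_dir/$database_name-$dd.sql"   # simulate a completed dump
if [ -e "$backup_dir/$database_name-$dd.sql" ]; then
  echo "backup ok: $database_name-$dd.sql"
else
  echo "backup missing for $dd" >&2
  exit 1
fi
```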
Backup script
#!/bin/bash
# number of backups to keep
number=3
# backup directory
backup_dir=/var/db/backup
# date stamp
dd=`date +%Y%m%d`
# dump tool
tool=/usr/bin/mysqldump
# MySQL user
username=root
# MySQL password
password=root
# database to back up
database_name=test
# one-liner equivalent: mysqldump -u root -p123456 users > /root/mysqlbackup/users-$filename.sql
$tool -u $username -p$password -hmysql-master -P3306 --databases $database_name > $backup_dir/$database_name-$dd.sql
# log the created backup
echo "create $backup_dir/$database_name-$dd.sql" >> $backup_dir/log.txt
# find the oldest backup (the deletion candidate)
delfile=`ls -l -crt $backup_dir/*.sql | awk '{print $9 }' | head -1`
# check whether the current number of backups exceeds $number
count=`ls -l -crt $backup_dir/*.sql | awk '{print $9 }' | wc -l`
if [ $count -gt $number ]
then
  # delete the oldest backup so only $number backups are kept
  rm $delfile
  # log the deletion
  echo "delete $delfile" >> $backup_dir/log.txt
fi
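The deletion step above removes only one file per run, so if the job misses several days the directory can stay over the limit for a while. A self-contained variant (with illustrative filenames) that prunes everything beyond the newest $number in one pass; since the names carry a YYYYMMDD stamp, plain lexicographic order is also chronological order:

```shell
#!/bin/bash
# Prune all but the newest $number date-stamped dumps in a single pass.
number=3
backup_dir=$(mktemp -d)                       # stand-in for /var/db/backup
for d in 20240101 20240102 20240103 20240104 20240105; do
  touch "$backup_dir/test-$d.sql"             # simulate five daily dumps
done
count=$(ls "$backup_dir"/*.sql | wc -l)
if [ "$count" -gt "$number" ]; then
  # lexicographic == chronological for YYYYMMDD names, so the first
  # (count - number) entries are the oldest and can all be removed
  ls "$backup_dir"/*.sql | head -n "$((count - number))" | xargs rm --
fi
ls "$backup_dir"
```

After the run only test-20240103.sql, test-20240104.sql, and test-20240105.sql remain.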
Summary
Applications are now generally deployed in containers, so day-to-day operations have to target those containerized applications as well; the backup and export approach, though, is essentially the same as before.
From: https://blog.51cto.com/u_11555417/12056781