1. etcd Configuration Overview
etcd is a distributed key-value store used for configuration management and service discovery. It is typically used to provide consistency and high availability for critical data in distributed systems. The etcd configuration is usually a YAML file (or a set of command-line flags) containing the parameters and settings that control the behavior of the etcd server.
The etcd used by Kubernetes runs over HTTPS with certificates; the certificate locations are the paths shown in the configuration below.
Multi-node etcd cluster deployment: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/#multi-node-etcd-cluster
[root@ localhost]# cat /etc/systemd/system/etcd.service
...
[Service]
Type=notify
WorkingDirectory=/data/kube/etcd/
ExecStart=/data/kube/bin/etcd \
--name=etcd1 \
--cert-file=/etc/etcd/ssl/etcd.pem \ # certificate
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/cluster/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/cluster/ssl/ca.pem \
...
- Parameter reference
Parameter | Description
name | Name of this etcd node
data-dir | Directory where etcd stores its data
wal-dir | Directory where etcd stores its Write-Ahead Log (WAL) files
snapshot-count | Number of committed transactions after which a new snapshot is triggered
heartbeat-interval | Interval at which the leader sends heartbeats to the other nodes
election-timeout | Timeout after which a node that has not heard from a leader starts an election
listen-peer-urls | URLs on which this node listens for peer (cluster) traffic
listen-client-urls | URLs on which this node listens for client traffic
initial-advertise-peer-urls | Peer URLs this node advertises to the rest of the cluster when joining
advertise-client-urls | Client URLs this node advertises to clients
initial-cluster | Initial cluster membership: all nodes and their names
initial-cluster-token | Initial cluster token used during bootstrap
initial-cluster-state | Cluster state: new (new cluster) or existing (existing cluster)
auto-compaction-retention=1 | Auto-compaction retention for the MVCC key-value store, in hours; 0 disables auto-compaction
max-request-bytes=10485760 | Maximum request size in bytes; the etcd default is 1.5 MB. Note: etcd versions below 3.2.10 do not support this flag, so do not set it there
max-snapshots | Maximum number of snapshot files to retain
max-wals | Maximum number of WAL files to retain
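As an illustration only, a minimal sketch of how a few of these flags could be appended to the ExecStart line shown above (the path and values are examples, roughly the upstream defaults, not tuning recommendations):
--data-dir=/data/kube/etcd \
--snapshot-count=100000 \
--heartbeat-interval=100 \
--election-timeout=1000 \
...                          # remaining flags as in the unit file above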
- etcd port information
Port | Purpose
2380 | Peer communication: traffic between cluster members, such as leader election and log replication
2379 | Client communication: the client API endpoint (gRPC in v3, with an HTTP gateway) that clients connect to
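A quick way to sanity-check the client port, assuming the certificate paths from the unit file above (adjust the endpoint and paths to your own cluster):
# query the built-in health endpoint on the client port
curl --cacert /etc/kubernetes/cluster/ssl/ca.pem \
     --cert /etc/etcd/ssl/etcd.pem \
     --key /etc/etcd/ssl/etcd-key.pem \
     https://127.0.0.1:2379/health
# a healthy member replies with something like {"health":"true"}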
2. etcd High Availability
In an etcd cluster, leader election is part of the Raft protocol and is used to elect a new leader when the current leader fails or can no longer communicate with the other members.
1. The node loses its leader
{"level":"info","ts":"2024-05-07T01:54:04.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9afce9447872453 lost leader 5ee9c643fc08f96b at term 52"}
This log entry shows that node 9afce9447872453 has lost its leader 5ee9c643fc08f96b, which typically happens when the leader stops responding to heartbeats or requests from the other nodes.
2. A new election starts
{"level":"info","ts":"2024-05-07T01:54:04.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9afce9447872453 is starting a new election at term 52"}
Having lost its leader, node 9afce9447872453 starts a new election. In the Raft protocol the election consists of a pre-vote (PreVote) phase and a vote (Vote) phase.
3. The node becomes a pre-candidate
{"level":"info","ts":"2024-05-07T01:54:04.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9afce9447872453 became pre-candidate at term 52"}
Node 9afce9447872453 has become a pre-candidate. This is the first step of the election, in which the node asks the other nodes for pre-votes.
4. Pre-vote requests are sent
{"level":"info","ts":"2024-05-07T01:54:04.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9afce9447872453 [logterm: 52, index: 456617084] sent MsgPreVote request to 2fa50bf947c1df3a at term 52"}
Node 9afce9447872453 sends a pre-vote request to another node, 2fa50bf947c1df3a, to check whether some other node is already at a higher term.
5. Pre-vote responses are received
{"level":"info","ts":"2024-05-07T01:54:04.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9afce9447872453 received MsgPreVoteResp from 9afce9447872453 at term 52"}
Node 9afce9447872453 receives its own pre-vote response, which is part of the pre-vote process.
6. Read requests time out while there is no leader
{"level":"warn","ts":"2024-05-07T01:54:17.333Z","caller":"etcdserver/v3_server.go:852","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
This warning shows that the node timed out waiting for a ReadIndex response while trying to serve a linearizable read. This can be caused by network delays or by a slow node.
7. The health check fails
{"level":"warn","ts":"2024-05-07T01:54:17.332Z","caller":"etcdhttp/metrics.go:173","msg":"serving /health false; no leader"}
This log entry shows that the etcd cluster health check fails because there is no leader. It usually means the cluster cannot serve requests until a new leader has been elected.
In the Raft protocol, if a node does not hear from the leader for a certain period of time, it assumes it is isolated and starts a new leader election. This involves incrementing the term counter and trying to win the support of a majority of the cluster through the pre-vote and vote phases. A node that collects enough votes becomes the new leader and starts accepting and processing requests again.
Fixing leader-election problems usually comes down to making sure all members of the etcd cluster can reach each other and that there are no network partitions or other communication failures. It may also be necessary to check the nodes' configuration and system resources to make sure they can run properly.
While an election is in progress, the etcd cluster cannot process write requests, because there is no confirmed leader to handle them. New writes are blocked until the election completes and a new leader is in place.
An election is triggered as follows: the leader sends heartbeats to all followers at a fixed interval (for example every 100 ms). A follower that does not receive a heartbeat within its election timeout starts an election; provided the candidate can win a majority of votes, a new leader is elected and the cluster becomes stable again.
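A quick worked example using the default flag values documented by etcd (your deployment may use different numbers):
--heartbeat-interval=100    # the leader sends a heartbeat every 100 ms
--election-timeout=1000     # a follower that sees no heartbeat for 1000 ms starts an election
# 1000 ms / 100 ms = about 10 consecutive missed heartbeats before a follower becomes a candidate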
3. Common etcd Operations Commands
- etcd data operations
# Create a key
root [/var/lib/etcd ]# etcdctl put /zhang san
OK
# Get a key
root [/var/lib/etcd ]# etcdctl get /zhang
/zhang
san
# List all keys (keys only)
root [/var/lib/etcd ]# etcdctl get "" --prefix --keys-only
/abc
/zhang
# List all keys with their values
root [ /var/lib/etcd ]# etcdctl get "" --prefix
/abc
a=1
b=2
/zhang
san
- Delete and modify operations
# Modify a key with put; wrap a multi-line value in single quotes
root [/var/lib/etcd ]# etcdctl put /zhang '
> a=1
> b=2
> '
OK
root [/var/lib/etcd ]# etcdctl get /zhang --prefix
/zhang
a=1
b=2
# Delete a key (the command prints the number of keys deleted)
root [/var/lib/etcd ]# etcdctl del /zhang
1
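The del command also accepts --prefix to remove every key under a prefix; a small sketch with a hypothetical prefix, shown only to illustrate the flag:
# delete all keys starting with /conf/
root [/var/lib/etcd ]# etcdctl del /conf/ --prefix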
- etcd cluster operations
# List the cluster members and their status
etcdctl member list
# Add a node to the cluster
etcdctl member add <name> --peer-urls=<peer-url>
# Remove a node from the cluster
etcdctl member remove <member-id>
# Check cluster health; reports the health of each etcd endpoint
etcdctl endpoint health
# Check cluster storage usage and the status of each member, including DB size, leader, raft term/index
etcdctl endpoint status --write-out=table
# List active alarms
etcdctl alarm list
# Clear alarms:
etcdctl alarm disarm
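For a TLS-enabled cluster (such as the Kubernetes etcd from section 1), these commands also need the v3 API and client certificates; a sketch using the example paths that appear later in this article:
ETCDCTL_API=3 etcdctl --endpoints=https://10.119.48.166:2379 \
  --cacert=/etc/kubernetes/cluster1/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health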
- etcd backup and restore commands
# Create a snapshot
etcdctl snapshot save /path/to/backup.db
# Restore a snapshot
etcdctl snapshot restore /path/to/backup.db --data-dir /new/data/dir
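Optionally, verify the snapshot before relying on it. On recent etcd releases this subcommand has been moved to etcdutl, so etcdctl may print a deprecation notice:
# show the hash, revision, total keys and size of the snapshot
etcdctl snapshot status /path/to/backup.db --write-out=table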
- Check cluster status
etcdctl endpoint health
[root@ly 17:19 ~]# etcdctl --endpoints=http://172.26.144.119:2379 endpoint health
http://172.26.144.119:2379 is healthy: successfully committed proposal: took = 2.906218ms
Check member information
root [ /var/lib/etcd ]# etcdctl member list
51bca066d01d6698, started, etcd3, http://10.119.49.169:2380, http://10.226.49.169:2379, false
7ef357413fd1c123, started, etcd2, http://10.119.49.168:2380, http://10.226.49.168:2379, false
c863b58837d07726, started, etcd1, http://10.119.48.116:2380, http://10.226.48.116:2379, false
The columns in the endpoint status output (shown in the table further below) are:
- ENDPOINT: the access endpoint of the etcd instance.
- ID: the unique identifier of the etcd instance.
- VERSION: the etcd version.
- DB SIZE: the size of the etcd database.
- IS LEADER: whether this instance is the current cluster leader.
- RAFT TERM: the current Raft term.
- RAFT INDEX: the current Raft log index.
- RAFT APPLIED INDEX: the Raft log index that has already been applied.
- ERRORS: any error messages.
- View cluster status, including etcd space usage and leader state
# Show which member is the leader
root [ /var/lib/etcd ]# etcdctl endpoint status --cluster --write-out=table
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| http://10.119.49.169:2379 | 51bca066d01d6698 | 3.5.16 | 20 kB | true | false | 4 | 87 | 87 | |
| http://10.119.49.168:2379 | 7ef357413fd1c123 | 3.5.16 | 20 kB | false | false | 4 | 87 | 87 | |
| http://10.119.48.116:2379 | c863b58837d07726 | 3.5.16 | 20 kB | false | false | 4 | 87 | 87 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
4. Common Operational Scenarios
Case 1: Adding and removing etcd cluster members
Scenario: a user-facing service misbehaves, and investigation shows "request cluster ID mismatch" errors in the etcd service log. This means the cluster ID this node is trying to join no longer matches the actual cluster ID, and the etcd cluster is in an abnormal state.
Checking the member list with the command below shows that the etcd1 member is in an abnormal state and its client address is no longer registered:
shell> etcdctl --endpoints=http://172.26.238.109:2379 member list --write-out=table
+------------------+---------+-------+---------------------------+-------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------+---------------------------+-------------------------+------------+
| 3f46dd6c20547bdc | started | etcd3 | http://etcd3.default:2380 | http://172.27.0.3:2379 | false |
| 5e33781cf341b138 | started | etcd2 | http://etcd2.default:2380 | http://172.27.1.4:2379 | false |
| 7ddd2057b06daab9 | unstarted | etcd1 | http://etcd1.default:2380 | | false |
+------------------+---------+-------+---------------------------+-------------------------+------------+
Restarting the etcd1 service and even reinstalling the etcd cluster did not help. The only option left is to remove the node from the cluster and then re-add it, which refreshes the faulty node's cluster ID.
# 0. Set the environment variable
shell> export ETCDCTL_API=3
# 1. Remove the etcd1 member from the cluster
shell> etcdctl --endpoints=http://172.26.238.109:2379 member remove 7ddd2057b06daab9
Member 7ddd2057b06daab9 removed from cluster 43b2b807c40ffe0
# 2. Stop the etcd1 service (it is deployed as a static pod)
shell> sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp/
# 3. After a short wait, delete etcd1's data (it will be re-synced when the node rejoins the cluster)
shell> sudo rm -rf /data/docker/etcd-pod
# 4. In etcd.yaml, change "--initial-cluster-state=new" to "--initial-cluster-state=existing"
shell> sudo sed -i 's/new/existing/g' /tmp/etcd.yaml
# 5. Add the etcd1 member back
shell> etcdctl --endpoints=http://172.26.238.109:2379 member add etcd1 --peer-urls="http://etcd1.default:2380"
Member 64c0a38ce18ae4e1 added to cluster 43b2b807c40ffe0
ETCD_NAME="etcd1"
ETCD_INITIAL_CLUSTER="etcd3=http://etcd3.default:2380,etcd2=http://etcd2.default:2380,etcd1=http://etcd1.default:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://etcd1.default:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
# 6. Start the etcd1 node
shell> sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/
Checking again after etcd1 has rejoined shows that the cluster is back to a healthy state.
kubewps> etcdctl --endpoints=http://172.26.238.109:2379 member list --write-out=table
+------------------+---------+-------+---------------------------+-------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------+---------------------------+-------------------------+------------+
| 3f46dd6c20547bdc | started | etcd3 | http://etcd3.default:2380 | http://172.27.0.3:2379 | false |
| 5e33781cf341b138 | started | etcd2 | http://etcd2.default:2380 | http://172.27.1.4:2379 | false |
| 64c0a38ce18ae4e1 | started | etcd1 | http://etcd1.default:2380 | http://172.27.2.20:2379 | false |
+------------------+---------+-------+---------------------------+-------------------------+------------+
Case 2: etcd data backup and restore
Official documentation: https://etcd.io/docs/v3.6/op-guide/recovery/#restoring-a-cluster
For a cluster with n members, quorum is (n/2)+1.
An etcd cluster can tolerate up to (N-1)/2 members failing temporarily; a 3-node cluster can lose one node. If more than (N-1)/2 members fail, the cluster becomes unavailable until enough members are restored to regain quorum.
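A quick worked example of the quorum and fault-tolerance formulas (integer division):
# N = 3: quorum = 3/2 + 1 = 2, fault tolerance = (3-1)/2 = 1 member
# N = 5: quorum = 5/2 + 1 = 3, fault tolerance = (5-1)/2 = 2 members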
The error "recovering backend from snapshot error: database snapshot" indicates that data loss is preventing this etcd node from starting; almost certainly its snap files are corrupted or missing.
Check the cluster status:
[root@localhost ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                             ERROR
etcd-2               Unhealthy   Get "https://10.119.48.169:2379/health": dial tcp 10.119.48.169:2379: connect: connection refused
controller-manager   Healthy     ok
scheduler            Healthy     ok
etcd-0               Healthy     {"health":"true","reason":""}
etcd-1               Healthy     {"health":"true","reason":""}
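# The following exports point etcdctl at the v3 API and this cluster's client certificates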
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/cluster1/ssl/ca.pem
export ETCDCTL_CERT=/etc/etcd/ssl/etcd.pem
export ETCDCTL_KEY=/etc/etcd/ssl/etcd-key.pem
[root@localhost ~]# ETCDCTL_API=3 etcdctl --endpoints="https://10.119.48.166:2379,https://10.119.48.168:2379,https://10.119.48.169:2379" endpoint status --write-out=table
{"level":"warn","ts":"2023-07-18T11:18:44.220701+0800","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00002c000/10.119.48.166:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.119.48.169:2379: connect: connection refused\""}
Failed to get the status of endpoint https://10.119.48.169:2379 (context deadline exceeded)
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.119.48.166:2379 | 9b4462a24fce09fa | 3.5.8 | 4.0 MB | true | false | 10 | 970112 | 970112 | |
| https://10.119.48.168:2379 | d0844b5c962b1cf2 | 3.5.8 | 4.0 MB | false | false | 10 | 970112 | 970112 | |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
- Export the etcd data
The check above shows that etcd2 cannot start because its data is damaged, so back up the data from the leader node and use it as the restore source.
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/cluster1/ssl/ca.pem
export ETCDCTL_CERT=/etc/etcd/ssl/etcd.pem
export ETCDCTL_KEY=/etc/etcd/ssl/etcd-key.pem
[root@localhost ~]# etcdctl --endpoints="https://10.119.48.166:2379" snapshot save /data/snapshot$(date +%Y%m%d).db
{"level":"info","ts":"2023-07-18T11:34:45.205311+0800","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/data/snapshot20230718.db.part"}
{"level":"info","ts":"2023-07-18T11:34:45.214521+0800","logger":"client","caller":"[email protected]/maintenance.go:212","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2023-07-18T11:34:45.214686+0800","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://10.119.48.166:2379"}
{"level":"info","ts":"2023-07-18T11:34:45.374149+0800","logger":"client","caller":"[email protected]/maintenance.go:220","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2023-07-18T11:34:45.410687+0800","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://10.119.48.166:2379","size":"4.0 MB","took":"now"}
{"level":"info","ts":"2023-07-18T11:34:45.410837+0800","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/data/snapshot20230718.db"}
Snapshot saved at /data/snapshot20230718.db
The backup file /data/snapshot20230718.db has been generated successfully.
- Preparation before the restore
- First stop the etcd and kube-apiserver services in the Kubernetes cluster, so that no new data is written while the restore is in progress
- Back up the existing etcd data directory, so that a failed restore cannot lose the data for good
- Restore the data
Copy the snapshot to the three etcd nodes:
ansible -i etcd.conf etcd -m copy -a 'src=/data/snapshot20230718.db dest=/tmp/snapshot20230718.db' -u deploy
Run the snapshot restore on each of the three nodes:
sudo ETCDCTL_API=3 etcdctl snapshot restore /tmp/snapshot20230718.db --data-dir=/data/kube/etcd --name=etcd1 --cacert=/etc/kubernetes/cluster1/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --initial-cluster-token=etcd-cluster-0 --initial-cluster=etcd1=https://10.119.48.166:2380,etcd2=https://10.119.48.168:2380,etcd3=https://10.119.48.169:2380 --initial-advertise-peer-urls=https://10.119.48.166:2380 # run on etcd1
sudo ETCDCTL_API=3 etcdctl snapshot restore /tmp/snapshot20230718.db --data-dir=/data/kube/etcd --name=etcd2 --cacert=/etc/kubernetes/cluster1/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --initial-cluster-token=etcd-cluster-0 --initial-cluster=etcd1=https://10.119.48.166:2380,etcd2=https://10.119.48.168:2380,etcd3=https://10.119.48.169:2380 --initial-advertise-peer-urls=https://10.119.48.168:2380 # run on etcd2
sudo ETCDCTL_API=3 etcdctl snapshot restore /tmp/snapshot20230718.db --data-dir=/data/kube/etcd --name=etcd3 --cacert=/etc/kubernetes/cluster1/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --initial-cluster-token=etcd-cluster-0 --initial-cluster=etcd1=https://10.119.48.166:2380,etcd2=https://10.119.48.168:2380,etcd3=https://10.119.48.169:2380 --initial-advertise-peer-urls=https://10.119.48.169:2380 # run on etcd3
Finally, start the etcd and kube-apiserver services and verify that the cluster is healthy:
[root@localhost kube]# ETCDCTL_API=3 etcdctl --endpoints="https://10.119.48.166:2379,https://10.119.48.168:2379,https://10.119.48.169:2379" endpoint status --write-out=table
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.119.48.166:2379 | 9b4462a24fce09fa | 3.5.8 | 4.0 MB | true | false | 2 | 127 | 127 | |
| https://10.119.48.168:2379 | d0844b5c962b1cf2 | 3.5.8 | 4.0 MB | false | false | 2 | 127 | 127 | |
| https://10.119.48.169:2379 | 53ed13bd007bd8a0 | 3.5.8 | 4.0 MB | false | false | 2 | 127 | 127 | |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Case 3: etcd runs out of space
Symptom: applications fail with the error "etcdserver: mvcc: database space exceeded".
Cause: the default etcd storage quota is 2 GB and the suggested maximum is 8 GB. When the quota is reached an alarm is raised and etcd enters a restricted maintenance mode in which writes are rejected.
The default storage size limit is 2GB, configurable with --quota-backend-bytes flag. 8GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
Official documentation: https://etcd.io/docs/v3.5/op-guide/maintenance/
Check the current space usage
# check space usage
$ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | bf9071f4639c75cc | 2.3.0+git |   18 MB |      true |         2 |       3332 |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
# list alarms
$ ETCDCTL_API=3 etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
Temporary fix: compact and defragment the keyspace, then disarm the alarm; this restores normal operation for a while.
# get current revision
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
# compact away all old revisions
$ ETCDCTL_API=3 etcdctl compact $rev
compacted revision 1516
# defragment away excessive space
$ ETCDCTL_API=3 etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ ETCDCTL_API=3 etcdctl put newkey 123
OK
- Permanent fix
The permanent fix is to change the etcd quota configuration:
--auto-compaction-retention=1      # auto-compaction retention for the MVCC key-value store, in hours; 0 disables auto-compaction
--max-request-bytes=10485760       # maximum request size in bytes; the etcd default is 1.5 MB; etcd versions below 3.2.10 do not support this flag
--quota-backend-bytes=4294967296   # etcd DB quota; the default is 2 GB, raised here to 4 GB; it can be increased further, up to 8 GB
In a systemd deployment, the change looks roughly as shown below.
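A minimal sketch, assuming the systemd-managed etcd from section 1 (the file path and flag values are examples; adjust to your deployment):
# add the flags to the ExecStart line in /etc/systemd/system/etcd.service
ExecStart=/data/kube/bin/etcd \
  --name=etcd1 \
  ... \
  --auto-compaction-retention=1 \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=4294967296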
Reload systemd and restart the etcd service for the change to take effect.
Case 4: frequent etcd leader changes
Checking the etcd cluster status shows that the leader keeps changing, and the etcd logs contain entries like the following.
Abnormal log 1
waiting for ReadIndex response took too long
Abnormal log 2
{"level":"warn","msg":"slow fdatasync","took":"2.14025047s","expected-duration":"1s"}
Abnormal log 3
raft.node: 255a2e4092d561fb changed leader from 255a2e4092d561fb to 1de1eaa8fb268f49 at term 3112
You can run the command below repeatedly to check whether the leader keeps moving between members:
export ETCDCTL_API=3
etcdctl --endpoints='https://10.119.52.70:2379,https://10.119.52.71:2379,https://10.119.52.72:2379' endpoint status --cacert /etc/kubernetes/cluster1/ssl/ca.pem -w table
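A simple sketch to watch for leader changes over time (interval and endpoints are examples): if the IS LEADER column keeps moving, or the RAFT TERM keeps increasing between runs, elections are happening repeatedly.
while true; do
  etcdctl --endpoints='https://10.119.52.70:2379,https://10.119.52.71:2379,https://10.119.52.72:2379' \
    endpoint status --cacert /etc/kubernetes/cluster1/ssl/ca.pem -w table
  sleep 10
done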
Common causes: network latency and slow disk I/O.
Fix
As a temporary fix, increase the heartbeat and election parameters so that transient delays no longer trigger leader switches:
--election-timeout=5000      (default: 1000 ms)
--heartbeat-interval=500     (default: 100 ms)
Edit /etc/systemd/system/etcd.service and add the flags above (the heartbeat and election settings must be identical on every member of the cluster):
# Reload the systemd configuration
sudo systemctl daemon-reload
# Restart etcd
sudo systemctl restart etcd.service
A permanent fix is to run etcd on dedicated nodes, or to investigate the network and disks and move etcd to an environment with better performance.
5. Common etcd Monitoring
By deploying etcd-exporter together with Prometheus and configuring etcd alert rules, problems in the etcd cluster can be detected early.
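Before wiring up scraping and alerts, it can help to confirm that the metrics are actually being served. A quick sketch assuming the certificate paths from section 1 (etcd exposes Prometheus metrics on its client URL, or on a separate --listen-metrics-urls address if configured):
curl --cacert /etc/kubernetes/cluster/ssl/ca.pem \
     --cert /etc/etcd/ssl/etcd.pem \
     --key /etc/etcd/ssl/etcd-key.pem \
     https://127.0.0.1:2379/metrics | grep etcd_server_has_leader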
Common alert rules
1. etcd cluster has no leader
Etcd cluster have no leader
- alert: EtcdNoLeader
  expr: etcd_server_has_leader == 0
  for: 0m
  labels:
    severity: critical
  annotations:
    summary: Etcd no leader (instance {{ $labels.instance }})
    description: "Etcd cluster have no leader\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
2. etcd gRPC requests slow
GRPC requests slowing down, 99th percentile is over 0.15s
- alert: EtcdGrpcRequestsSlow
  expr: histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{grpc_type="unary"}[1m])) by (grpc_service, grpc_method, le)) > 0.15
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: Etcd GRPC requests slow (instance {{ $labels.instance }})
    description: "GRPC requests slowing down, 99th percentile is over 0.15s\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
3. etcd HTTP requests slow
HTTP requests slowing down, 99th percentile is over 0.15s
- alert: EtcdHttpRequestsSlow
  expr: histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[1m])) > 0.15
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: Etcd HTTP requests slow (instance {{ $labels.instance }})
    description: "HTTP requests slowing down, 99th percentile is over 0.15s\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
4. Etcd member communication slow
Etcd member communication slowing down, 99th percentile is over 0.15s
- alert: EtcdMemberCommunicationSlow
  expr: histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[1m])) > 0.15
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: Etcd member communication slow (instance {{ $labels.instance }})
    description: "Etcd member communication slowing down, 99th percentile is over 0.15s\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
5. Etcd high fsync durations
Etcd WAL fsync duration increasing, 99th percentile is over 0.5s
- alert: EtcdHighFsyncDurations
  expr: histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[1m])) > 0.5
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: Etcd high fsync durations (instance {{ $labels.instance }})
    description: "Etcd WAL fsync duration increasing, 99th percentile is over 0.5s\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"