Introduction to the Horizontal Pod Autoscaler (HPA) Controller
In earlier sections we scaled Pods up and down by running kubectl scale by hand, but that clearly falls short of Kubernetes' goal of automation. Kubernetes should be able to watch how Pods are actually being used and adjust the Pod count on its own, and that is exactly what the Horizontal Pod Autoscaler (HPA) controller provides.
The HPA reads each Pod's resource utilization, compares it against the target defined in the HPA spec, calculates how many replicas are needed, and then adjusts the Pod count accordingly. Like a Deployment, the HPA is itself a Kubernetes resource object: it tracks the load of all Pods managed by the target controller and decides whether the replica count needs to change. That is the essence of how HPA works.
The Horizontal Pod Autoscaler can automatically scale the number of Pods in a ReplicationController, Deployment, ReplicaSet, or StatefulSet based on CPU utilization. Besides CPU utilization, it can also scale on custom metrics provided by the application. Autoscaling does not apply to objects that cannot be scaled, such as a DaemonSet.
The HPA controller adjusts the replica count so that the observed CPU utilization moves toward the target value; it does not match it exactly. The design also accounts for the fact that scaling decisions take time to have an effect: for example, while a new Pod is being created to relieve CPU pressure, overall CPU usage may keep climbing. For this reason, after each scaling decision the controller waits before making another one: by default roughly 3 minutes for scale-up and 5 minutes for scale-down (adjustable via the --horizontal-pod-autoscaler-upscale-delay and --horizontal-pod-autoscaler-downscale-delay flags).
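The replica count the controller aims for follows the scaling formula from the Kubernetes documentation; the worked numbers below are only an illustrative example:
desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )
# Example: 3 replicas averaging 60% CPU against a 20% target gives ceil(3 * 60 / 20) = 9 replicas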
The HPA API comes in three versions, which you can list with kubectl api-versions | grep autoscal:
[root@dce-10-6-215-215 ~]# kubectl api-versions | grep autoscal
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
Check which version is used by default:
[root@dce-10-6-215-215 ~]# kubectl explain hpa
KIND: HorizontalPodAutoscaler
VERSION: autoscaling/v1 # this is the version currently in use
DESCRIPTION:
configuration of a horizontal pod autoscaler.
FIELDS:
apiVersion <string>
......
View another specific version:
# inspect the autoscaling/v2beta1 version
[root@dce-10-6-215-215 ~]# kubectl explain hpa --api-version=autoscaling/v2beta1
KIND: HorizontalPodAutoscaler
VERSION: autoscaling/v2beta1
DESCRIPTION:
HorizontalPodAutoscaler is the configuration for a horizontal pod
autoscaler, which automatically manages the replica count of any resource
implementing the scale subresource based on the metrics specified.
FIELDS:
apiVersion <string>
The differences between the three versions are:
- autoscaling/v1: supports scaling on CPU metrics only;
- autoscaling/v2beta1: adds support for Resource Metrics (e.g. Pod memory) and Custom Metrics;
- autoscaling/v2beta2: supports Resource Metrics (e.g. Pod memory), Custom Metrics, and External Metrics.
Manual scaling
The pc-deployment.yaml file looks like this:
apiVersion: apps/v1
kind: Deployment            # resource type: Deployment
metadata:
  name: pc-deployment       # name of the Deployment
  namespace: test
spec:
  replicas: 3               # 3 replicas
  selector:                 # selector, must match the labels in the template
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
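Note: for the CPU-based HPA later in this article to report a utilization percentage, the container should declare a CPU request, because utilization is computed as a percentage of the request. If the HPA's TARGETS column shows <unknown>, add a requests block along the lines of this sketch (the 100m / 64Mi values are illustrative, not part of the original manifest):
      containers:
      - name: nginx
        image: nginx:1.14
        resources:
          requests:
            cpu: 100m      # HPA CPU utilization is calculated against this request
            memory: 64Mi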
Right now we have one Deployment and three Pods:
[root@dce-10-6-215-215 ~]# kubectl get deploy,pod -n test
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/pc-deployment 3/3 3 3 15s
NAME READY STATUS RESTARTS AGE
pod/pc-deployment-5db6b86685-2b8rs 1/1 Running 0 15s
pod/pc-deployment-5db6b86685-2mv2n 1/1 Running 0 15s
pod/pc-deployment-5db6b86685-5gnjx 1/1 Running 0 15s
Method 1: edit the YAML file. Open pc-deployment.yaml in vim, change the replica count, save, and apply the file again. Here I changed the replica count to 4.
# change the replica count to 4
[root@dce-10-6-215-215 ~]# vim pc-deployment.yaml
# re-apply the file after editing
[root@dce-10-6-215-215 ~]# kubectl apply -f pc-deployment.yaml
deployment.apps/pc-deployment configured
# one new Pod is being created
[root@dce-10-6-215-215 tmp]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
pc-deployment-5db6b86685-2b8rs 1/1 Running 0 3m45s
pc-deployment-5db6b86685-2mv2n 1/1 Running 0 3m45s
pc-deployment-5db6b86685-5gnjx 1/1 Running 0 3m45s
pc-deployment-5db6b86685-x6t4p 1/1 ContainerCreating 0 14s
# all 4 Pods are now running
[root@dce-10-6-215-215 ~]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
pc-deployment-5db6b86685-2b8rs 1/1 Running 0 3m47s
pc-deployment-5db6b86685-2mv2n 1/1 Running 0 3m47s
pc-deployment-5db6b86685-5gnjx 1/1 Running 0 3m47s
pc-deployment-5db6b86685-x6t4p 1/1 Running 0 16s
Method 2: edit the Deployment object directly and change spec.replicas, here to 5.
# use kubectl edit; the change takes effect as soon as you save and exit
[root@dce-10-6-215-215 ~]# kubectl edit deploy pc-deployment -n test
deployment.apps/pc-deployment edited
# check the Pods: there are now 5 of them
[root@dce-10-6-215-215 ~]# kubectl get pod,deploy -n test
NAME READY STATUS RESTARTS AGE
pod/pc-deployment-5db6b86685-2b8rs 1/1 Running 0 8m50s
pod/pc-deployment-5db6b86685-2mv2n 1/1 Running 0 8m50s
pod/pc-deployment-5db6b86685-5gnjx 1/1 Running 0 8m50s
pod/pc-deployment-5db6b86685-hn7zz 1/1 Running 0 64s
pod/pc-deployment-5db6b86685-x6t4p 1/1 Running 0 5m19s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/pc-deployment 5/5 5 5 8m50s
Method 3: use the kubectl scale command.
# change the replica count to 6; note that you scale by the controller name, not the Pod name
[root@dce-10-6-215-215 ~]# kubectl scale deploy pc-deployment --replicas=6 -n test
deployment.apps/pc-deployment scaled
# check the Deployment
[root@dce-10-6-215-215 ~]# kubectl get deploy pc-deployment -n test
NAME READY UP-TO-DATE AVAILABLE AGE
pc-deployment 6/6 6 6 15m
# check the Pods
[root@dce-10-6-215-215 ~]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
pc-deployment-5db6b86685-2b8rs 1/1 Running 0 15m
pc-deployment-5db6b86685-2mv2n 1/1 Running 0 15m
pc-deployment-5db6b86685-5gnjx 1/1 Running 0 15m
pc-deployment-5db6b86685-dzrxj 1/1 Running 0 2m48s
pc-deployment-5db6b86685-hn7zz 1/1 Running 0 8m7s
pc-deployment-5db6b86685-x6t4p 1/1 Running 0 12m
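A fourth, script-friendly option not shown above is kubectl patch, which changes the replica count without opening an editor; a minimal sketch:
# patch the Deployment spec directly, setting the replica count to 6
kubectl patch deployment pc-deployment -n test -p '{"spec":{"replicas":6}}'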
Automatic scaling with HPA
Before using HPA you first need to deploy metrics-server, which collects resource utilization data for the cluster.
Releases page: https://github.com/kubernetes-sigs/metrics-server/releases
You can also install it with the following command:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
A short while after the installation completes, run kubectl top nodes; output like the following means everything is working:
[root@dce-10-6-215-215 ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
dce-10-6-215-190 464m 6% 2302Mi 15%
dce-10-6-215-200 620m 4% 2397Mi 7%
dce-10-6-215-215 3305m 44% 10574Mi 69%
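If kubectl top nodes instead reports that the Metrics API is not available, a common cause on test clusters is that metrics-server cannot verify the kubelet serving certificates. A frequently used workaround, for non-production clusters only and shown here purely as a troubleshooting sketch, is to add the --kubelet-insecure-tls argument:
# allow metrics-server to talk to kubelets without verifying their serving certificates
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'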
There are currently 6 Pods:
[root@dce-10-6-215-215 ~]# kubectl get pod,deploy -n test
NAME READY STATUS RESTARTS AGE
pod/pc-deployment-5db6b86685-2b8rs 1/1 Running 0 105m
pod/pc-deployment-5db6b86685-2mv2n 1/1 Running 0 105m
pod/pc-deployment-5db6b86685-5gnjx 1/1 Running 0 105m
pod/pc-deployment-5db6b86685-dzrxj 1/1 Running 0 92m
pod/pc-deployment-5db6b86685-hn7zz 1/1 Running 0 98m
pod/pc-deployment-5db6b86685-x6t4p 1/1 Running 0 102m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/pc-deployment 6/6 6 6 105m
Create a deploy-hpa.yaml file with the following content:
apiVersion: autoscaling/v1           # API version autoscaling/v1
kind: HorizontalPodAutoscaler        # resource type: HPA
metadata:
  namespace: test                    # namespace
  name: nginx-app1-podautoscaler     # name of the HPA
  labels:
    app: nginx-pod                   # labels on the HPA object itself
spec:
  scaleTargetRef:                    # the target resource to scale, here a Deployment
    apiVersion: apps/v1              # API version of the target; apps/v1 for a Deployment
    kind: Deployment                 # the scaled object is a Deployment
    name: pc-deployment              # name of the Deployment
  minReplicas: 2                     # minimum number of replicas
  maxReplicas: 20                    # maximum number of replicas
  targetCPUUtilizationPercentage: 20 # target CPU utilization: the HPA scales in below 20% and scales out above it
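The same autoscaling/v1 HPA can also be created imperatively. The command below is equivalent to the manifest above, except that by default the HPA is named after the Deployment (pc-deployment) rather than nginx-app1-podautoscaler:
# create an HPA for the Deployment: 2-20 replicas, 20% target CPU utilization
kubectl autoscale deployment pc-deployment --min=2 --max=20 --cpu-percent=20 -n test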
Now create the HPA controller:
[root@dce-10-6-215-215 ~]# kubectl apply -f deploy-hpa.yaml
horizontalpodautoscaler.autoscaling/nginx-app1-podautoscaler created
Check the HPA controller:
# in TARGETS, the value left of the / is the current CPU utilization, the value on the right is the target we configured
# MINPODS is the minimum Pod count, MAXPODS the maximum, REPLICAS the current replica count
[root@dce-10-6-215-215 ~]# kubectl get hpa -n test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-app1-podautoscaler Deployment/pc-deployment 0%/20% 2 20 6 30s
View the HPA details:
[root@dce-10-6-215-215 ~]# kubectl describe hpa nginx-app1-podautoscaler -n test
Name: nginx-app1-podautoscaler
Namespace: test
Labels: app=nginx-pod
Annotations: CreationTimestamp: Sun, 10 Jul 2022 12:40:01 +0800
Reference: Deployment/pc-deployment
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 20%
Min replicas: 2
Max replicas: 20
Deployment pods: 6 current / 6 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ScaleDownStabilized recent recommendations were higher than current one, applying the highest recent recommendation
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Wait about 5 minutes, then check the Pods and the HPA again.
# check the HPA
[root@dce-10-6-215-215 ~]# kubectl get hpa -n test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-app1-podautoscaler Deployment/pc-deployment 0%/20% 2 20 2 8m22s
# check the Pods: only 2 are left. CPU usage stayed well below the target, so the HPA scaled the Deployment down to 2, the minimum we configured
[root@dce-10-6-215-215 ~]# kubectl get pod,deploy -n test
NAME READY STATUS RESTARTS AGE
pod/pc-deployment-5db6b86685-2b8rs 1/1 Running 0 117m
pod/pc-deployment-5db6b86685-2mv2n 1/1 Running 0 117m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/pc-deployment 2/2 2 2 117m
Check the HPA details again:
[root@dce-10-6-215-215 ~]# kubectl describe hpa nginx-app1-podautoscaler -n test
Name: nginx-app1-podautoscaler
Namespace: test
Labels: app=nginx-pod
Annotations: CreationTimestamp: Sun, 10 Jul 2022 12:40:01 +0800
Reference: Deployment/pc-deployment
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 20%
Min replicas: 2
Max replicas: 20
Deployment pods: 2 current / 2 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True TooFewReplicas the desired replica count is less than the minimum replica count
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
# the scale-down is recorded here
Normal SuccessfulRescale 11m horizontal-pod-autoscaler New size: 2; reason: All metrics below target
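The walkthrough above only demonstrates scale-down. To watch the HPA scale out, you need to push CPU usage above the 20% target; a minimal sketch, assuming you first expose the Deployment as a Service (no Service was created earlier in this article):
# expose the nginx pods inside the cluster (hypothetical Service on port 80)
kubectl expose deployment pc-deployment --name=pc-deployment --port=80 -n test
# run a temporary load generator that requests the Service in a tight loop
kubectl run load-generator -n test --image=busybox:1.28 --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://pc-deployment.test.svc.cluster.local > /dev/null; done"
# watch the HPA react; once CPU climbs above 20%, REPLICAS should increase
kubectl get hpa -n test -w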
Scaling based on CPU and memory
apiVersion: autoscaling/v2beta1      # autoscaling/v2beta1; autoscaling/v1 supports CPU only
kind: HorizontalPodAutoscaler        # resource type: HPA
metadata:
  namespace: test                    # namespace
  name: nginx-app1-podautoscaler     # name of the HPA
  labels:
    app: nginx-pod                   # labels on the HPA object itself
spec:
  scaleTargetRef:                    # the target resource to scale, here a Deployment
    apiVersion: apps/v1              # API version of the target; apps/v1 for a Deployment
    kind: Deployment                 # the scaled object is a Deployment
    name: pc-deployment              # name of the Deployment
  minReplicas: 2                     # minimum number of replicas
  maxReplicas: 20                    # maximum number of replicas
  metrics:
  - type: Resource
    resource:
      name: cpu                      # CPU metric
      targetAverageUtilization: 80   # target: 80% average utilization
  - type: Resource
    resource:
      name: memory                   # memory metric
      targetAverageValue: 30Mi       # target: 30Mi average usage per Pod
This HPA targets the Pods managed by the pc-deployment Deployment: when average CPU utilization exceeds 80% or average memory usage exceeds 30Mi, automatic scale-out is triggered, keeping the replica count between 2 and 20.
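On clusters where the GA autoscaling/v2 API is available (Kubernetes v1.23 and later), the same policy is expressed with a nested target block per metric; a sketch of the equivalent manifest:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  namespace: test
  name: nginx-app1-podautoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pc-deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU utilization exceeds 80% of the request
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 30Mi       # scale out when average memory use per Pod exceeds 30Mi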
Delete the HPA controller:
[root@dce-10-6-215-215 ~]# kubectl delete -f deploy-hpa.yaml
horizontalpodautoscaler.autoscaling "nginx-app1-podautoscaler" deleted