
HPA && metrics-server

Posted: 2023-01-01 22:44:41

27. HPA

27.1 Pod Scaling Overview

The number of pod replicas is adjusted dynamically based on current pod load: during business peaks the replica count is scaled out automatically so requests are served promptly, and during off-peak hours pods are scaled in to cut costs and improve efficiency.
Public clouds additionally support node-level elastic scaling.

27.2 Scaling with the scale Command

#When --replicas is set lower than the current pod count, k8s kills the surplus pods; here 3 pods become 1
root@k8s-master1:~/k8s-data/yaml/dubbo# kubectl get deployments.apps -n demo 
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment         1/1     1            1           6d21h
tomcat-app1-deployment   1/1     1            1           6d22h

root@k8s-master1:~/k8s-data/yaml/dubbo# kubectl scale deployment -n demo nginx-deployment --replicas=1

27.3 Dynamic Scaling Controller Types

  • Horizontal Pod Autoscaler (HPA)
Scales the number of pod replicas horizontally based on pod resource utilization.
  • Vertical Pod Autoscaler (VPA)
Adjusts the resource limits of individual pods based on their utilization; cannot be used together with HPA on the same resource metrics.
  • Cluster Autoscaler (CA)
Scales node count up or down based on cluster-wide node resource usage, ensuring CPU and memory are available for creating pods.

27.4 HPA Controller Overview

# kube-controller-manager --help | grep initial-readiness-delay
The Horizontal Pod Autoscaler (HPA) controller automatically adjusts the number of pods running in a k8s cluster according to predefined thresholds and the pods' current resource utilization (automatic horizontal scaling). Relevant kube-controller-manager flags:
--horizontal-pod-autoscaler-sync-period #interval at which the HPA controller queries metrics and reconciles pod replica counts, default 15s
--horizontal-pod-autoscaler-downscale-stabilization #scale-down stabilization window, default 5 minutes
--horizontal-pod-autoscaler-cpu-initialization-period #initialization delay; a pod's CPU metrics are ignored for this period after startup, default 5 minutes
--horizontal-pod-autoscaler-initial-readiness-delay #pod readiness grace period; pods within this window are treated as not ready and their metrics are not collected, default 30 seconds
--horizontal-pod-autoscaler-tolerance #metric deviation the HPA controller tolerates (float, default 0.1): the ratio of the current metric to the target must exceed 1 + 0.1 = 1.1 or fall below 1 - 0.1 = 0.9 before HPA acts. For example, with a CPU utilization target of 50% and a current value of 80%, 80/50 = 1.6 > 1.1 triggers a scale-out; a low enough ratio triggers a scale-in.
Trigger condition: avg(CurrentPodsConsumption)/Target > 1.1 or < 0.9 — sum the metric across the N pods, divide by the pod count to get the average, then divide by the target; above 1.1 scale out, below 0.9 scale in.
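As a hypothetical illustration of the tolerance rule, using the numbers from the example above (50% target, 80% observed average), the ratio test can be sketched in shell:

```shell
# Tolerance check sketch: HPA only acts when current/target leaves
# the 0.9..1.1 band (default --horizontal-pod-autoscaler-tolerance=0.1).
# Hypothetical numbers: target 50% CPU, observed pod average 80%.
awk -v current=80 -v target=50 'BEGIN {
  r = current / target                 # 80 / 50 = 1.6
  if      (r > 1.1) print "scale out"
  else if (r < 0.9) print "scale in"
  else              print "within tolerance, no change"
}'
```

With these numbers the script prints "scale out", matching the 80/50 = 1.6 > 1.1 case described above.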

Formula: TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization)/Target) #ceil rounds up to the next whole pod count
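Plugging hypothetical numbers into the formula: three pods at 80%, 70% and 90% CPU with a 60% target give ceil(240/60) = 4 replicas. The integer ceiling can be sketched in shell:

```shell
# Worked example of TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization)/Target)
# with hypothetical per-pod utilizations 80%, 70%, 90% and Target=60.
sum=$((80 + 70 + 90))                        # 240
target=60
# integer ceiling division: (sum + target - 1) / target
desired=$(( (sum + target - 1) / target ))
echo "desired replicas: $desired"            # prints: desired replicas: 4
```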

Metric data requires deploying metrics-server; that is, HPA uses metrics-server as its data source.
The HPA controller was introduced in k8s 1.1. Early versions used the Heapster component to collect pod metrics; from k8s 1.11 onward, Metrics Server handles collection. The collected data is exposed through APIs such as metrics.k8s.io, custom.metrics.k8s.io, and external.metrics.k8s.io, which the HPA controller then queries to scale pods up or down based on a given resource utilization.

27.5 metrics-server Deployment

metrics-server is a source of container resource metrics for Kubernetes' built-in autoscaling pipelines.
metrics-server collects resource metrics from the kubelet on each node and exposes them in the Kubernetes apiserver through the Metrics API for use by the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler; the metrics can also be viewed with kubectl top node/pod.

  • YAML
root@k8s-master1:~/20220814/metrics-server-0.6.1-case# cat metrics-server-v0.6.1.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
#image name changed to a private registry copy
        #image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        image: harbor.nbrhce.com/demo/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
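One deployment caveat not covered by the manifest above: on clusters where the kubelets serve self-signed certificates, metrics-server cannot verify them and will log TLS errors. A common workaround (at the cost of skipping kubelet certificate verification) is the --kubelet-insecure-tls flag; a sketch of the container args in that case:

```yaml
# Hypothetical args variant: skips kubelet certificate verification.
# Acceptable for labs; not recommended for production.
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
```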

27.6 HPA Implementation

  • TOMCAT YAML
root@k8s-master1:~/20220814/metrics-server-0.6.1-case# cat tomcat-app1.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: tomcat-app1-deployment-label
  name: tomcat-app1-deployment
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat-app1-selector
  template:
    metadata:
      labels:
        app: tomcat-app1-selector
    spec:
      containers:
      - name: tomcat-app1-container
        #image: harbor.magedu.local/magedu/tomcat-app1:v7
        #image: tomcat:7.0.93-alpine 
#stress-test image: spawns 2 VM workers that drive CPU straight to 100%, making the scaling threshold easy to hit
        image: lorel/docker-stress-ng 
        args: ["--vm", "2", "--vm-bytes", "256M"]
        ##command: ["/apps/tomcat/bin/run_tomcat.sh"]
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: tomcat-app1-service-label
  name: tomcat-app1-service
  namespace: demo
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40003
  selector:
    app: tomcat-app1-selector
  • HPA
#note which controller (by name and kind) the HPA references
#minReplicas: 3 sets the minimum replica count
#maxReplicas: 10 sets the maximum; scale-out stops at 10 replicas
#targetCPUUtilizationPercentage is the CPU threshold that triggers scaling
root@k8s-master1:~/20220814/metrics-server-0.6.1-case# cat hpa.yaml 
#apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v1 
kind: HorizontalPodAutoscaler
metadata:
  namespace: demo
  name: tomcat-app1-podautoscaler
  labels:
    app: tomcat-app1
    version: v2beta1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    #apiVersion: extensions/v1beta1 
    kind: Deployment
    name: tomcat-app1-deployment 
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60
  #metrics:
  #- type: Resource
  #  resource:
  #    name: cpu
  #    targetAverageUtilization: 60
  #- type: Resource
  #  resource:
  #    name: memory

#output like this indicates the HPA has scaled out
root@k8s-master1:~/20220814/metrics-server-0.6.1-case# kubectl get hpa -n demo
NAME                        REFERENCE                           TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
tomcat-app1-podautoscaler   Deployment/tomcat-app1-deployment   199%/60%   3         10        10         15m
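The manifest above uses autoscaling/v1, which only supports targetCPUUtilizationPercentage. On clusters running Kubernetes 1.23 or later, the same autoscaler can be expressed with the stable autoscaling/v2 API, where the CPU target moves into a metrics list. A sketch, reusing the names from the manifest above (untested here):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  namespace: demo
  name: tomcat-app1-podautoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tomcat-app1-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```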

From: https://www.cnblogs.com/yidadasre/p/17019184.html
