
k8s-hpa

Date: 2023-03-28

kubectl scale: manually scales the number of pods running in a k8s cluster up (increase) or down (decrease).
HPA (Horizontal Pod Autoscaler): automatic horizontal pod scaling. K8s watches metrics of the containers running in a pod (CPU usage, memory usage, network request volume) and dynamically adds or removes pod replicas.

I. Manually adjusting the number of pods:
    1. Edit the yaml file and change the replicas value
    2. Change the deployment's pod count in the dashboard
    3. Via the kubectl scale command (temporary):
        kubectl scale deployment linux39-tomcat-app1-deployment --replicas=3 -n linux39
        kubectl delete hpa xx -n xx    #delete any existing HPA first, otherwise it will override the manual scale
    4. Via the kubectl edit command (temporary): kubectl edit deployment linux39-tomcat-app1-deployment -n linux39

Example:
[root@localhost7C ]# kubectl run net-test1 --image=alpine --replicas=4 sleep 360000     

[root@localhost7C ]# kubectl  scale   deployment  --replicas=3 net-test1 -n default

[root@localhost7C ]# kubectl get deployment -n default net-test1 
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
net-test1   3/3     3            3           57m


#scale does not create an HPA object, so the three commands below return nothing after kubectl scale; they work after kubectl autoscale.
[root@localhost7C ]# kubectl  get hpa net-test1  -n default 
[root@localhost7C ]# kubectl  describe  hpa net-test1   -n default 
[root@localhost7C ]# kubectl  delete  hpa net-test1   -n default
    


II. HPA autoscaling
1. The HPA controller
Starting with version 1.1, k8s ships a controller named HPA (Horizontal Pod Autoscaler) that automatically scales pods in and out based on resource utilization (CPU/memory) of the containers in a pod.
Early versions could only use CPU utilization collected by the Heapster component as the trigger; since k8s 1.11 the data is collected by the Metrics Server
and exposed through aggregated APIs (e.g. metrics.k8s.io, custom.metrics.k8s.io, external.metrics.k8s.io),
which the HPA controller then queries, so that pods can be scaled up or down based on the utilization of a given resource.
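The scaling decision itself follows the formula documented for the HPA controller: desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue). A minimal shell sketch of that arithmetic (the sample values are made up for illustration):

```shell
# desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
current_replicas=3
current_cpu=90    # measured average CPU utilization across the pods, in %
target_cpu=50     # target utilization, e.g. targetCPUUtilizationPercentage
# integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"   # -> 6
```

So with three pods averaging 90% CPU against a 50% target, the controller would scale the deployment to 6 replicas.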

By default the controller manager queries the metrics resource usage every 15s (configurable with --horizontal-pod-autoscaler-sync-period).
Three metric types are supported:
    ->predefined metrics (e.g. pod CPU), computed as a utilization ratio
    ->custom pod metrics, computed as raw values
    ->custom object metrics
Two metrics query paths are supported:
    ->Heapster
    ->a custom REST API
Multiple metrics can be combined.


        
1. Configure autoscaling from the command line:
        kubectl autoscale deployment linux39-tomcat-app1-deployment --min=2 --max=5 --cpu-percent=50 -n linux39
2. Define autoscaling in a yaml file (kind: HorizontalPodAutoscaler). Field notes:
apiVersion: autoscaling/v2beta1           #API version
kind: HorizontalPodAutoscaler             #object type
metadata:                                 #object metadata
  namespace: linux36                      #namespace the HPA belongs to after creation
  name: linux36-tomcat-app1-podautoscaler #HPA name
  labels:                                 #labels
    app: linux36-tomcat-app1              #HPA label
    version: v2beta1                      #HPA api-version label
spec:                                     #object spec
  scaleTargetRef:                         #target of the horizontal scaling: a Deployment, ReplicationController or ReplicaSet
    apiVersion: apps/v1                   #API version of the target (HorizontalPodAutoscaler.spec.scaleTargetRef.apiVersion)
    kind: Deployment                      #target object type is Deployment (important)
    name: linux36-tomcat-app1-deployment  #exact name of the deployment (important)
  minReplicas: 2                          #minimum number of pods
  maxReplicas: 5                          #maximum number of pods
  targetCPUUtilizationPercentage: 30      #CPU utilization threshold: the HPA triggers once the pods' CPU usage reaches 30%
  #metrics:                               #metrics-based definition instead (autoscaling/v2beta1)
  #- type: Resource                       #resource-type metric
  #  resource:                            #resource definition
  #    name: cpu                          #resource name: cpu
  #    targetAverageUtilization: 80       #target CPU utilization in percent
  #- type: Resource                       #resource-type metric
  #  resource:                            #resource definition
  #    name: memory                       #resource name: memory
  #    targetAverageValue: 1024Mi         #target average memory usage (an absolute value, not a percentage)

Notes from experience:
If the HPA minimum is higher than the replicas value in the workload's yaml, the HPA min takes precedence.
If the HPA minimum is lower than the replicas value, the HPA min still takes precedence: once the HPA is active, the replica count is governed by the HPA's min/max bounds, not by replicas.
Stateless applications are good candidates for HPA; stateful applications should not use HPA.
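The min/max precedence above is simply a clamp applied to whatever replica count the metrics suggest. A sketch of that clamping in shell (the bounds match the --min=2 --max=5 examples in this section; the suggested counts are made up):

```shell
# clamp a computed replica count to the HPA bounds
clamp() {
  desired=$1; min=$2; max=$3
  [ "$desired" -lt "$min" ] && desired=$min
  [ "$desired" -gt "$max" ] && desired=$max
  echo "$desired"
}
clamp 1 2 5   # metrics suggest 1, minReplicas=2 -> prints 2
clamp 7 2 5   # metrics suggest 7, maxReplicas=5 -> prints 5
```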




HPA autoscaling walkthrough:

1. Clone the code:
git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server/

2. Prepare the image.
The cluster's metric endpoints to test:
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods


3. Test the metric data:
# kubectl top
# kubectl top nodes    #fails as follows before metrics-server is installed
Error from server (NotFound): the server could not find the requested resource (get
services http:heapster:)

Fix: deploy metrics-server. Pull the image:
docker pull k8s.gcr.io/metrics-server-amd64:v0.3.5    #Google registry
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5    #Aliyun mirror registry
Or re-tag it and push it to a local registry:
docker tag k8s.gcr.io/metrics-server-amd64:v0.3.5 harbor.zzhz.com/baseimages/metrics-server-amd64:v0.3.5
docker push harbor.zzhz.com/baseimages/metrics-server-amd64:v0.3.5



4. (Optional) Modify the controller-manager startup parameters and restart controller-manager.
   --horizontal-pod-autoscaler-sync-period sets the metrics collection interval (optional; 10s in the unit below).
   --use-service-account-credentials=true makes controllers use individual service-account credentials.
   Note: systemd treats # as a comment only at the start of a line, so do not put trailing comments on ExecStart continuation lines.
[root@localhost7C ~]# kube-controller-manager  --help | grep horizontal-pod-autoscaler

[root@localhost7C ~]# vim /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.20.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --node-cidr-mask-size=24 \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.10.0.0/16 \
  --use-service-account-credentials=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target



5. Edit the yaml files
cd   metrics-server/deploy/kubernetes
[root@localhost7C ~]# ll  k8s/metrics/
    aggregated-metrics-reader.yaml
    auth-delegator.yaml
    auth-reader.yaml
    metrics-apiservice.yaml
    metrics-server-deployment.yaml
    metrics-server-service.yaml
    resource-reader.yaml
    
[root@localhost7C ]#  vim metrics-server-deployment.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6   #image (Aliyun mirror)
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
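If `kubectl top` still returns no data after deployment, a frequent cause is the kubelet's self-signed serving certificate; metrics-server accepts the flags below for that case (whether they are needed is cluster-dependent, so verify before relying on them):

```yaml
# extra container args for metrics-server (extends the args list above)
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls                 # skip kubelet cert verification (lab use only)
          - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalDNS
```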



6. Deploy and check resource usage
[root@localhost7C kubernetes]# kubectl  apply  -f ./


[root@localhost7C kubernetes]# kubectl top  node
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
192.168.80.120   107m         2%     1140Mi          101%      
192.168.80.130   137m         3%     1098Mi          98%       
192.168.80.140   108m         2%     1200Mi          84%       
192.168.80.150   69m          1%     816Mi           64%       
192.168.80.160   65m          1%     844Mi           53%       
192.168.80.170   59m          1%     752Mi           43%       


[root@localhost7C kubernetes]# kubectl top  pod  -A
NAMESPACE     NAME                             CPU(cores)   MEMORY(bytes)             
kube-system   kube-dns-69979c4b84-2h6d2        3m           31Mi            
kube-system   kube-flannel-ds-amd64-2262m      3m           17Mi            
kube-system   kube-flannel-ds-amd64-69qjr      3m           15Mi            
kube-system   kube-flannel-ds-amd64-6bsnm      1m           11Mi            
kube-system   kube-flannel-ds-amd64-6cq5q      2m           11Mi            
kube-system   kube-flannel-ds-amd64-ckmzs      2m           14Mi            
kube-system   kube-flannel-ds-amd64-xddjr      3m           11Mi            
kube-system   metrics-server-ccccb9bb6-m6pws   2m           16Mi      

Metric data (the endpoints now return results):
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
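These endpoints return NodeMetricsList / PodMetricsList objects as JSON. As a sketch, extracting the cpu field from a hypothetical, abbreviated nodes response with standard tools (the response string below is invented for illustration):

```shell
# hypothetical, abbreviated response from /apis/metrics.k8s.io/v1beta1/nodes
response='{"kind":"NodeMetricsList","items":[{"metadata":{"name":"192.168.80.120"},"usage":{"cpu":"107m","memory":"1140Mi"}}]}'
cpu=$(echo "$response" | sed -n 's/.*"cpu":"\([^"]*\)".*/\1/p')
echo "$cpu"   # -> 107m (millicores, i.e. 0.107 of a CPU)
```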

7. Create a test workload
[root@localhost7C kubernetes]# cat nginx.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-nginx-deployment-label
  name: magedu-nginx-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: magedu-nginx-selector
  template:
    metadata:
      labels:
        app: magedu-nginx-selector
    spec:
      containers:
      - name: magedu-nginx-container
        image: harbor.zzhz.com/baseimage/nginx:latest
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        resources:
          limits:
            cpu: '1'
            memory: 200Mi
          requests:
            cpu: '1'
            memory: 200Mi
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-nginx-service-label
  name: magedu-nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080
  selector:
    app: magedu-nginx-selector


#Method 1: configure autoscaling from the command line:
#--cpu-percent=5 triggers the HPA once the pods' CPU usage reaches 5% of their request
#Wait a moment for the HPA to show values (metrics-server collects pod metrics roughly every 60s)
kubectl autoscale deployment magedu-nginx-deployment --max=5 --min=2 --cpu-percent=5 -n default 

Verify: 
[root@localhost7C kubernetes]# kubectl   get hpa  magedu-nginx-deployment  -n default
NAME                      REFERENCE                            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
magedu-nginx-deployment   Deployment/magedu-nginx-deployment   0%/5%     2         5         3          3m47s

#Show details
[root@localhost7C kubernetes]# kubectl  describe   hpa  magedu-nginx-deployment   -n default
Name:                                                  magedu-nginx-deployment
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 27 Mar 2023 14:20:23 +0800
Reference:                                             Deployment/magedu-nginx-deployment
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  0% (0) / 5%
Min replicas:                                          2
Max replicas:                                          5
Deployment pods:                                       2 current / 2 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  recommended size matches current size
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    TooFewReplicas    the desired replica count is less than the minimum replica count
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  69s   horizontal-pod-autoscaler  New size: 2; reason: All metrics below target

Verify: the deployment was scaled down to two pods. 
[root@localhost7C kubernetes]# kubectl   get hpa  magedu-nginx-deployment  -n default
NAME                      REFERENCE                            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
magedu-nginx-deployment   Deployment/magedu-nginx-deployment   0%/5%     2         5         2          7m55s


#Delete the hpa
[root@localhost7C kubernetes]# kubectl  delete  hpa magedu-nginx-deployment  -n default



#Method 2: define autoscaling in a yaml file (kind: HorizontalPodAutoscaler)
[root@localhost7C kubernetes]# cat hpa.yaml 
#apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v1 
kind: HorizontalPodAutoscaler
metadata:
  namespace: default
  name: magedu-nginx-podautoscaler
  labels:
    app: magedu-nginx
    version: v2beta1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    #apiVersion: extensions/v1beta1 
    kind: Deployment               #target type
    name: magedu-nginx-deployment  #name of the nginx deployment
  minReplicas: 3
  maxReplicas: 5
  targetCPUUtilizationPercentage: 4
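For reference, the same spec expressed on the newer autoscaling/v2 API (stable since Kubernetes 1.23; a sketch, not applied in this walkthrough):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  namespace: default
  name: magedu-nginx-podautoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: magedu-nginx-deployment
  minReplicas: 3
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 4    # same threshold as targetCPUUtilizationPercentage above
```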



[root@localhost7C kubernetes]# kubectl  apply   -f hpa.yaml 
horizontalpodautoscaler.autoscaling/magedu-nginx-podautoscaler created
                    
[root@localhost7C kubernetes]# kubectl  describe   hpa  magedu-nginx-podautoscaler 
Name:                                                  magedu-nginx-podautoscaler
Namespace:                                             default
Labels:                                                app=magedu-nginx
                                                       version=v2beta1
Annotations:                                           kubectl.kubernetes.io/last-applied-configuration:
                                                         {"apiVersion":"autoscaling/v1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"app":"magedu-nginx","version":"v2b...
CreationTimestamp:                                     Mon, 27 Mar 2023 14:31:43 +0800
Reference:                                             Deployment/magedu-nginx-deployment
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 4%
Min replicas:                                          3
Max replicas:                                          5
Deployment pods:                                       2 current / 3 desired
Conditions:
  Type         Status  Reason            Message
  ----         ------  ------            -------
  AbleToScale  True    SucceededRescale  the HPA controller was able to update the target scale to 3
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  11s   horizontal-pod-autoscaler  New size: 3; reason: Current number of replicas below Spec.MinReplicas



[root@localhost7C kubernetes]# kubectl   get hpa  magedu-nginx-podautoscaler 
NAME                         REFERENCE                            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
magedu-nginx-podautoscaler   Deployment/magedu-nginx-deployment   0%/4%     3         5         3          88s



    
Reference:
https://www.cnblogs.com/qiuhom-1874/p/14293237.html

 

From: https://www.cnblogs.com/Yuanbangchen/p/17264018.html
