
KEDA — Kubernetes-Based Event-Driven Autoscaling (repost)

Posted: 2023-06-30

Tags: Based, Kubernetes, queues, RabbitMQ, event, scaling, ScaledObject, KEDA

Original: https://itnext.io/keda-kubernetes-based-event-driven-autoscaling-48491c79ec74

 

 

Event-driven computing is hardly a new idea; people in the database world have used database triggers for years. The concept is simple: whenever you add, change, or delete data, an event is fired to perform a variety of functions. What's new is the proliferation of these kinds of events and triggers in other areas, such as autoscaling, auto-remediation, and capacity planning. At its core, event-driven architecture is about reacting to events in your systems and acting accordingly.

Autoscaling (a form of automation in its own right) has become an integral component of almost every cloud platform, and microservices, i.e. containers, are no exception. In fact, containers, known for their flexible and decoupled design, are a natural fit for autoscaling, since they are much faster to create than virtual machines.

Why Autoscaling?

Capacity Scaling — Autoscaling

Scalability is one of the most important considerations for modern container-based application deployments. With the advancement of container orchestration platforms, it has never been easier to design solutions for scalability. Kubernetes-based event-driven autoscaling, or KEDA (built with the Operator Framework), allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers that respond to events happening in other services and scales workloads as needed. It also enables a container to consume events directly from the source, instead of routing them through HTTP.

KEDA works in any public or private cloud and on-premises, including Azure Kubernetes Service and Red Hat’s OpenShift. With this, developers can also now take Azure Functions, Microsoft’s serverless platform, and deploy it as a container in Kubernetes clusters, including on OpenShift.

This might look simple, but imagine a busy day with a massive volume of transactions: would it really be possible to manage the number of applications (Kubernetes Deployments) manually, as shown below?

Managing Autoscaling at Production

KEDA will automatically detect new deployments and start monitoring event sources, leveraging real-time metrics to drive scaling decisions.

KEDA

KEDA, as a component on Kubernetes, serves two key roles:

  1. Scaling Agent: activates and deactivates deployments, scaling them up to the configured replica count and back down to zero when there are no events.
  2. Kubernetes Metrics Server: exposes a multitude of event-related data, such as queue length or stream lag, which allows scaling decisions to be driven by specific types of event data.

The metrics server communicates with the Kubernetes HPA (Horizontal Pod Autoscaler) to drive the scale-out of Kubernetes deployment replicas. It is then up to the deployment to consume the events directly from the source. This preserves rich event integration and lets behaviors such as completing or abandoning queue messages work out of the box.

KEDA Architecture

Scaler

KEDA uses a "Scaler" to detect whether a deployment should be activated or deactivated; each Scaler is tied to a specific event source. Today KEDA supports multiple Scalers, each with its own triggers, such as Kafka (trigger: Kafka topics) and RabbitMQ (trigger: RabbitMQ queues), with many more to come.

Apart from these, KEDA integrates natively with Azure Functions tooling, adding Azure-specific scalers such as Azure Storage Queues, Azure Service Bus Queues, and Azure Service Bus Topics.

ScaledObject

A ScaledObject is deployed as a Kubernetes CRD (Custom Resource Definition), which provides the functionality of syncing a deployment with an event source.

ScaledObject Custom Resource Definition

Once the CRD is deployed, a ScaledObject can take configuration as below:

ScaledObject Spec
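The spec screenshot from the original post isn't reproduced here; as a sketch, a minimal ScaledObject skeleton (assuming a recent KEDA release with the `keda.sh/v1alpha1` API — early versions used `keda.k8s.io/v1alpha1` and a `deploymentName` label instead; all names here are illustrative) looks roughly like:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject
spec:
  scaleTargetRef:
    name: example-deployment   # the Deployment to scale
  pollingInterval: 30          # seconds between event-source checks
  cooldownPeriod: 300          # seconds to wait before scaling back to minReplicaCount
  minReplicaCount: 0           # allows scale to zero when there are no events
  maxReplicaCount: 100
  triggers:                    # one or more scalers; metadata is scaler-specific
  - type: rabbitmq
    metadata:
      queueName: example-queue
      mode: QueueLength
      value: "5"
```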

As mentioned above, different triggers are supported; some examples are shown below:

Trigger Configuration for ScaledObject
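The trigger screenshots aren't reproduced here; as a sketch (scaler metadata fields vary across KEDA versions, and the hosts, topics, and queue names below are illustrative), trigger stanzas for Kafka and RabbitMQ look roughly like:

```yaml
triggers:
# Kafka: scale on consumer-group lag for a topic
- type: kafka
  metadata:
    bootstrapServers: kafka.svc:9092
    consumerGroup: my-group
    topic: orders
    lagThreshold: "50"
# RabbitMQ: scale on queue length
- type: rabbitmq
  metadata:
    queueName: hello
    mode: QueueLength
    value: "5"
    hostFromEnv: RABBITMQ_HOST   # AMQP connection string read from the target's env
```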

Event Driven Autoscaling in Action — On-Premises Kubernetes Cluster

KEDA as Deployment on Kubernetes

KEDA Controller — Kubernetes Deployment

RabbitMQ Queues Scaler with KEDA

RabbitMQ is message-queueing software, also known as a message broker or queue manager. Simply put, it is software in which queues can be defined; applications connect to a queue and transfer messages onto it.

RabbitMQ Architecture

In the example below, a RabbitMQ server/publisher is deployed as a StatefulSet on Kubernetes:

RabbitMQ
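The manifest screenshot isn't reproduced here; a minimal sketch of such a StatefulSet (the image tag and ports are stock RabbitMQ defaults; names are illustrative) could look like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
        ports:
        - containerPort: 5672    # AMQP
        - containerPort: 15672   # management UI
```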

A RabbitMQ consumer is deployed as a Deployment; it consumes the messages published by the RabbitMQ server and simulates processing them.

RabbitMQ Consumer Deployment
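The consumer manifest isn't reproduced here; a sketch of such a Deployment (the consumer image and credentials are hypothetical placeholders, not from the original post):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-consumer
spec:
  replicas: 1                   # KEDA will drive this between 0 and maxReplicaCount
  selector:
    matchLabels:
      app: rabbitmq-consumer
  template:
    metadata:
      labels:
        app: rabbitmq-consumer
    spec:
      containers:
      - name: consumer
        image: example/rabbitmq-consumer:latest   # hypothetical consumer image
        env:
        - name: RABBITMQ_HOST
          value: amqp://user:password@rabbitmq.default.svc:5672
```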

Creating ScaledObject with RabbitMQ Triggers

Along with the deployment above, a ScaledObject configuration is provided, which is handled by the KEDA CRD created during the installation of KEDA on Kubernetes.

ScaledObject Configuration with RabbitMQ Trigger
ScaledObject on Kubernetes

Once the ScaledObject is created, the KEDA controller automatically syncs the configuration and starts watching the rabbitmq-consumer created above. KEDA seamlessly creates an HPA (Horizontal Pod Autoscaler) object with the required configuration and scales out the replicas based on the trigger rule provided through the ScaledObject (in this case, a queue length of 5). As there are no messages yet, the rabbitmq-consumer deployment replicas are set to zero, as shown below.
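Put together, a ScaledObject matching this behavior (queue length 5, scaling between 0 and 100 replicas) could look like the sketch below; the API version and all names are assumptions based on recent KEDA releases, not taken from the original screenshots:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer-scaler
spec:
  scaleTargetRef:
    name: rabbitmq-consumer     # the consumer Deployment watched by KEDA
  minReplicaCount: 0            # no messages -> zero replicas
  maxReplicaCount: 100
  triggers:
  - type: rabbitmq
    metadata:
      queueName: hello
      mode: QueueLength
      value: "5"                # target messages per replica
      hostFromEnv: RABBITMQ_HOST
```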

KEDA Controller on Kubernetes
Horizontal Pod Autoscaler created by KEDA
RabbitMQ Consumer — Replicas: 0

With the ScaledObject and HPA configuration above, KEDA drives the container to scale out according to the information received from the event source. Below, messages are published with a Kubernetes Job that produces 10 messages:

Kubernetes-Job to publish Queues
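The Job manifest isn't reproduced here; a sketch of such a publisher Job (the publisher image, its arguments, and the credentials are hypothetical placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: rabbitmq-publish
spec:
  template:
    spec:
      containers:
      - name: publisher
        image: example/rabbitmq-publisher:latest     # hypothetical publisher image
        args: ["--count", "10", "--queue", "hello"]  # publish 10 messages
        env:
        - name: RABBITMQ_HOST
          value: amqp://user:password@rabbitmq.default.svc:5672
      restartPolicy: Never
  backoffLimit: 4
```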

KEDA automatically scales the rabbitmq-consumer, currently at zero replicas, to two replicas to handle the messages.

Publishing 10 messages — RabbitMQ consumer scaled to two replicas:

10 messages — 2 replicas (scale to: 2 — scale down: 0)

Publishing 200 messages — RabbitMQ consumer scaled to forty (40) replicas:

200 messages — 40 replicas (scale to: 40 — scale down: 0)

Publishing 1000 messages — RabbitMQ consumer scaled to 100 replicas, as the maximum replica count is set to 100:

1000 messages — 100 replicas (scale to: 100 — scale down: 0)

KEDA provides a FaaS-like model of event-aware scaling, where Kubernetes deployments can dynamically scale to and from zero based on demand, without losing data or context. KEDA also presents a new hosting option for Azure Functions, which can be deployed as containers in Kubernetes clusters, bringing the Azure Functions programming model and scale controller to any Kubernetes implementation, in the cloud or on-premises.

KEDA also brings more event sources to Kubernetes. As more triggers are added over time, and as it offers a framework for application developers to design triggers suited to the nature of their applications, KEDA has great potential to become a necessity in production-grade Kubernetes deployments, making autoscaling an embedded component of application development.

From: https://www.cnblogs.com/panpanwelcome/p/17516719.html
