After the API server accepts a client request to create a Pod object, one important step is carried out by the scheduler (kube-scheduler): selecting the best available node in the cluster to receive and run the Pod. This task is normally handled by the default scheduler (default-scheduler). For each pending Pod, scheduling proceeds in three phases—predicate filtering, priority scoring, and final selection—to pick the most suitable node.
I. The Kubernetes scheduler
The core task of a Kubernetes system is to create the Pod objects that clients request and keep them running in their desired state. When a Pod is created, the scheduler is responsible for picking, based on a set of rules, a suitable node from the cluster to run each unscheduled Pod. During scheduling, the scheduler does not modify the Pod resource; it only reads its data, selects the best-fitting node according to the configured policies, and then binds the Pod to that node through an API call to complete the scheduling process.
Kubernetes ships with a default scheduler that fits the Pod scheduling needs of the vast majority of scenarios; its core goal is to distribute Pods fairly across cluster nodes based on resource availability. The default scheduler completes scheduling in three steps: node predicate filtering, node priority scoring, and node selection.
1) Predicate filtering (node pre-selection): every node is checked against a series of predicate rules, and nodes that do not meet the conditions are filtered out.
2) Priority scoring (node preference): the nodes that passed the predicates are ranked by priority so that the node best suited to run the Pod can be identified.
3) Selection: the node with the highest priority is picked to run the Pod; if more than one node ties for the highest score, one of them is chosen at random.
1. Predicate policies
Predicate policies are node filters. For example, a node's labels must match the Pod's node selector, and the resource requests of the Pod's containers must not exceed the node's remaining allocatable resources. During predicate evaluation the scheduler checks each node, in a specific order, against every configured predicate, and eliminates a node as soon as any single predicate fails (a one-vote veto). If no node satisfies all predicates, the Pod stays in the Pending state until at least one node becomes available.
CheckNodeCondition: checks whether a Pod may be scheduled onto a node that reports disk pressure, network unavailability, or a not-ready condition.
HostName: if the Pod object sets spec.nodeName, checks whether the node's name matches that value.
PodFitsHostPorts: if the Pod's containers define ports.hostPort, checks whether the specified ports are already occupied by other containers or services on the node.
MatchNodeSelector: if the Pod object defines spec.nodeSelector, checks whether the node's labels match that selector.
NoDiskConflict: checks whether the storage volumes requested by the Pod are available on the node; the check passes if there is no conflict.
PodFitsResources: checks whether the node has enough resources to satisfy the Pod's resource requests.
…and so on.
2. Priority functions
Once the predicates have produced a list of feasible nodes, the second phase—priority scoring—begins. In this phase the scheduler runs a series of priority functions (such as BalancedResourceAllocation and TaintTolerationPriority) against every node that passed the predicates to compute a priority score between 0 and 10, where 0 means unsuitable and 10 means the best fit for hosting the Pod.
In addition, each priority function can be assigned a weight expressed as a positive number. When computing a node's priority, the scheduler first multiplies each priority function's score by its weight (most priorities default to a weight of 1) and then sums the weighted scores of all priority functions to obtain the node's final priority. For example, a node scoring 8 on a function with weight 1 and 5 on a function with weight 2 ends up with a final score of 8×1 + 5×2 = 18.
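As an illustration only, here is a minimal sketch of the legacy kube-scheduler Policy file (loaded with --policy-config-file, deprecated in newer releases); the weight values are hypothetical and simply show where per-priority weights are declared:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "priorities": [
    {"name": "BalancedResourceAllocation", "weight": 1},
    {"name": "TaintTolerationPriority",    "weight": 3}
  ]
}

With such a configuration, TaintTolerationPriority's score would count three times as much as BalancedResourceAllocation's in the final sum.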
II. Node affinity scheduling (nodeAffinity)
Node affinity is a set of rules the scheduler uses to determine where a Pod may be placed. The rules are defined with custom labels on nodes and label selectors specified on the Pod. Node affinity lets a Pod express affinity (or anti-affinity) toward a group of nodes it can be scheduled onto, though it cannot target one specific node directly.
There are two kinds of node affinity rules: hard (required) affinity and soft (preferred) affinity. Hard affinity is a mandatory rule that must be satisfied when the Pod is scheduled; if no node satisfies it, the Pod is left in the Pending state. Soft affinity is a flexible scheduling preference: the Pod would rather run on a certain class of nodes, and the scheduler tries to honor that, but when the preference cannot be satisfied it falls back to a node that does not match the rule.
Defining node affinity rules hinges on two things: labeling the nodes appropriately, and giving the Pod a sensible label selector so the desired target nodes can be matched. Note that, as the suffix IgnoredDuringExecution in preferredDuringSchedulingIgnoredDuringExecution and requiredDuringSchedulingIgnoredDuringExecution implies, once a Pod has been scheduled to a node based on node affinity, the scheduler will not evict it from that node even if the node's labels later change and no longer satisfy the affinity rule—the rule only affects newly created Pods.
1. Required (hard) node affinity
requiredDuringSchedulingIgnoredDuringExecution
nodeAffinity supports the matchExpressions attribute for building more sophisticated label-selection logic. For example, the Pod defined in the manifest required-nodeAffinitu-pod.yaml below uses a required node affinity rule so that it can only be scheduled onto nodes carrying a zone label whose value is foo or bar.
[root@k8s-master1 ~]# cd nodeAffinity-pod/
[root@k8s-master1 nodeAffinity-pod]# vim required-nodeAffinitu-pod.yaml
[root@k8s-master1 nodeAffinity-pod]# cat required-nodeAffinitu-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: zone, operator: In, values: ["foo","bar"]}
After creating this resource in the cluster, its status shows it is stuck in Pending, because in this required node affinity scenario no node in the cluster satisfies the matching condition.
[root@k8s-master1 nodeAffinity-pod]# kubectl apply -f required-nodeAffinitu-pod.yaml
pod/pod-node-affinity-demo created
[root@k8s-master1 nodeAffinity-pod]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo   0/1     Pending   0          20s
The Events field in the output of kubectl describe gives the concrete reason: "0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity." The command and its output are shown below:
[root@k8s-master1 nodeAffinity-pod]# kubectl describe pods pod-node-affinity-demo
Name:         pod-node-affinity-demo
Namespace:    default
Priority:     0
Node:         <none>
Labels:       app=myapp
              tier=frontend
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  myapp:
    Image:        ikubernetes/myapp:v1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5n29f (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-5n29f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5n29f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  43s (x5 over 3m37s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.
Label the nodes—this is also one of the prerequisites for using node affinity:
[root@k8s-master1 nodeAffinity-pod]# kubectl label node k8s-node1 zone=foo
node/k8s-node1 labeled
[root@k8s-master1 nodeAffinity-pod]# kubectl label node k8s-node2 ssd=true
node/k8s-node2 labeled
[root@k8s-master1 nodeAffinity-pod]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo   1/1     Running   0          6m36s
Once the labels are in place, the Pod pod-node-affinity-demo is successfully scheduled onto k8s-node1, as the details below show:
[root@k8s-master1 nodeAffinity-pod]# kubectl describe pods pod-node-affinity-demo
Name:         pod-node-affinity-demo
Namespace:    default
Priority:     0
Node:         k8s-node1/10.0.0.132
Start Time:   Sun, 04 Sep 2022 15:38:17 +0800
Labels:       app=myapp
              tier=frontend
Annotations:  cni.projectcalico.org/podIP: 10.244.36.93/32
              cni.projectcalico.org/podIPs: 10.244.36.93/32
Status:       Running
IP:           10.244.36.93
IPs:
  IP:  10.244.36.93
Containers:
  myapp:
    Container ID:   docker://dd560887823602f11efeebbeadfb4abc7530d17ef00c3498128efca64bffe648
    Image:          ikubernetes/myapp:v1
    Image ID:       docker://sha256:d4a5e0eaa84f28550cb9dd1bde4bfe63a93e3cf88886aa5dad52c9a75dd0e6a9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 04 Sep 2022 15:38:19 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5n29f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-5n29f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5n29f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  2m49s (x7 over 8m13s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.
  Normal   Scheduled         2m26s                  default-scheduler  Successfully assigned default/pod-node-affinity-demo to k8s-node1
  Normal   Pulled            2m24s                  kubelet            Container image "ikubernetes/myapp:v1" already present on machine
  Normal   Created           2m24s                  kubelet            Created container myapp
  Normal   Started           2m24s                  kubelet            Started container myapp
When defining node affinity, the value of the requiredDuringSchedulingIgnoredDuringExecution field holds a nodeSelectorTerms list that defines the hard affinity rules; it may contain one or more terms, and the terms are ORed together—during matching, a node only has to satisfy any one of the nodeSelectorTerms.
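For contrast with the AND example that follows, a sketch of the OR form—two separate nodeSelectorTerms, either of which is enough for a node to pass—might look like this (the label values are taken from the examples above):

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:                    # term 1: nodes labeled zone=foo or zone=bar
          - {key: zone, operator: In, values: ["foo","bar"]}
        - matchExpressions:                    # term 2: nodes carrying an ssd label
          - {key: ssd, operator: Exists, values: []}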
The manifest below, in contrast, defines a single node selection term containing two label-selector expressions; within one term the expressions are ANDed together, so the node that satisfies both conditions is k8s-node2.
[root@k8s-master1 nodeAffinity-pod]# cat required-nodeAffinitu-pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: zone, operator: In, values: ["foo","bar"]}
          - {key: ssd, operator: Exists, values: []}
[root@k8s-master1 nodeAffinity-pod]# kubectl apply -f required-nodeAffinitu-pod1.yaml
pod/pod-node-affinity-demo-1 created
[root@k8s-master1 nodeAffinity-pod]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod-node-affinity-demo-1   1/1     Running   0          16s   10.244.169.134   k8s-node2   <none>           <none>
The operators supported in label selector expressions include In, NotIn, Exists, DoesNotExist, Lt, and Gt.
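As an illustration (the cpu-cores label name is a hypothetical custom label, not one used elsewhere in this article), Gt and Lt compare the label value as an integer, and the values must be written as strings:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: cpu-cores, operator: Gt, values: ["8"]}   # matches nodes whose cpu-cores label value is greater than 8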
Also note that node affinity is only one of the predicates the scheduler applies during node pre-selection; the other configured predicates still participate in the filtering process as usual.
2. Preferred (soft) node affinity
preferredDuringSchedulingIgnoredDuringExecution
Soft node affinity provides flexible control over node selection: the Pod being scheduled "should", rather than "must", be placed on certain nodes, and when the condition cannot be satisfied it can also be scheduled onto a node that does not match. In addition, each preference carries a weight attribute that lets the user express its priority; the value ranges from 1 to 100, and a larger number means a higher priority.
Check the current node labels:
[root@k8s-master1 nodeAffinity-pod]# kubectl get node --show-labels
NAME          STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master1   Ready    control-plane,master   33d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
k8s-node1     Ready    worker                 33d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=worker
k8s-node2     Ready    worker                 33d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ceph,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,node-role.kubernetes.io/worker=worker
The following Pod defines a soft node affinity that prefers nodes labeled zone=foo or zone=bar.
[root@k8s-master1 nodeAffinity-pod]# vim preferred-nodeAffinitu-pod.yaml
[root@k8s-master1 nodeAffinity-pod]# cat preferred-nodeAffinitu-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp-1
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 60
        preference:
          matchExpressions:
          - {key: zone, operator: In, values: ["foo","bar"]}
Create the Pod resource:
[root@k8s-master1 nodeAffinity-pod]# kubectl apply -f preferred-nodeAffinitu-pod.yaml
pod/pod-node-affinity-demo-2 created
[root@k8s-master1 nodeAffinity-pod]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
pod-node-affinity-demo-1   1/1     Running   0          103m
pod-node-affinity-demo-2   1/1     Running   0          7s
[root@k8s-master1 nodeAffinity-pod]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
pod-node-affinity-demo-1   1/1     Running   0          104m   10.244.169.134   k8s-node2   <none>           <none>
pod-node-affinity-demo-2   1/1     Running   0          84s    10.244.36.94     k8s-node1   <none>           <none>
This example shows that with soft affinity the Pod can still run even though the node it lands on does not carry the zone label it prefers.
Node affinity governs the relationship between a Pod and nodes: it is the set of conditions matched against nodes when the Pod is scheduled.
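For comparison, the simpler nodeSelector field expresses only exact label equality—roughly a minimal sketch (the Pod name is hypothetical, the label value follows the examples above):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-node-selector      # hypothetical name, for illustration only
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    zone: foo        # the node must carry exactly zone=foo; no In/Exists operators, no weights

nodeAffinity is a superset of this mechanism: it adds set-based operators and soft (weighted) preferences.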
III. Pod affinity scheduling
Pod-to-Pod affinity scheduling comes in two forms:
podAffinity: Pods prefer to stay close together—related Pods are placed in the same location, such as the same zone or the same rack, so that they can communicate with each other more efficiently; this relationship between Pods is called affinity.
podAntiAffinity: Pods prefer to stay apart—for example, two deployments of the same kind of application may prefer anti-affinity so that they do not interfere with each other; this relationship between Pods is called anti-affinity.
The point of Pod affinity and anti-affinity scheduling is that the scheduler may place the first Pod anywhere, and Pods that declare affinity or anti-affinity toward it are then placed dynamically relative to it.
The Kubernetes scheduler implements this with the built-in MatchInterPodAffinity predicate for node pre-selection and the InterPodAffinityPriority priority function for node scoring.
1. Location topology
Pod affinity requires the related Pods to run in the "same location", while anti-affinity requires them not to run in the "same location". What counts as the same location? It depends on the node's location topology: different topologies yield different answers. If the per-node kubernetes.io/hostname label is used as the criterion, then "same location" means the same node, and different nodes are different locations.
Therefore, when defining Pod affinity and anti-affinity, a label selector is used to pick the Pods being depended on, and the labels of the nodes those Pods run on determine what "same location" concretely means.
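As a sketch of how topologyKey changes the meaning of "same location" (zone here refers to the node label used later in this article):

  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        # topologyKey: kubernetes.io/hostname  -> "same location" = the same node
        # topologyKey: zone                    -> "same location" = any node with the same zone label value
        topologyKey: zone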
2. Required (hard) Pod affinity
Required Pod affinity is also defined with the requiredDuringSchedulingIgnoredDuringExecution attribute. Pod affinity describes a placement dependency between a Pod and existing Pods with certain characteristics, so to test Pod affinity the depended-on Pods—carrying the relevant labels—must exist beforehand.
Example: define two Pods; the first serves as the reference, and the second follows it.
[root@k8s-master1 ~]# mkdir podAffinity
[root@k8s-master1 ~]# cd podAffinity/
[root@k8s-master1 podAffinity]# vim pod-required-affinity-demo.yaml
[root@k8s-master1 podAffinity]# cat pod-required-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
This means the second Pod must run on the same node as a Pod carrying the label app=myapp. The label selector picks the existing Pods of interest; the label "kubernetes.io/hostname" on the nodes those Pods run on (its value is the node's host name) then defines what "same location" means, and the new Pod is scheduled onto a node in that location.
[root@k8s-master1 podAffinity]# kubectl apply -f pod-required-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@k8s-master1 podAffinity]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          9s    10.244.36.95   k8s-node1   <none>           <none>
pod-second   1/1     Running   0          9s    10.244.36.96   k8s-node1   <none>           <none>
The result shows that wherever the first Pod is scheduled, the second Pod follows—this is Pod affinity.
The scheduler first uses the label selector to find all Pods carrying the label app=myapp, then obtains the host-name identifiers of the nodes they run on, then finds all nodes whose labels match those values to complete the node pre-selection, and finally scores these nodes with the priority functions to pick the node that will run the new Pod.
Note that if a node's labels change at runtime so that the Pod's affinity rule is no longer satisfied, the Pod keeps running on that node—the rule only affects newly created Pods. Also, labelSelector only matches Pods in the same namespace as the Pod being scheduled; other namespaces can be selected by adding a namespaces field.
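A minimal sketch of a podAffinityTerm that also looks at another namespace (the namespace name "prod" is hypothetical):

  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        namespaces: ["prod"]                  # match app=myapp Pods in the prod namespace instead of the Pod's own
        topologyKey: kubernetes.io/hostname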
3. Preferred (soft) Pod affinity
Similar to node affinity, Pods also support a flexible affinity mechanism via the preferredDuringSchedulingIgnoredDuringExecution attribute: the scheduler tries its best to satisfy the affinity constraints, but when they cannot be met it is still allowed to schedule the Pod onto another node.
[root@k8s-master1 podAffinity]# kubectl get pods --show-labels
NAME         READY   STATUS    RESTARTS   AGE   LABELS
pod-first    1/1     Running   0          19m   app=myapp,tier=frontend
pod-second   1/1     Running   0          19m   app=backend,tier=db
[root@k8s-master1 podAffinity]# kubectl get nodes --show-labels
NAME          STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master1   Ready    control-plane,master   33d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
k8s-node1     Ready    worker                 33d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=worker
k8s-node2     Ready    worker                 33d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ceph,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,node-role.kubernetes.io/worker=worker
A manifest using soft Pod affinity:
[root@k8s-master1 podAffinity]# vim pod-preferred-affinitu-demo.yaml
[root@k8s-master1 podAffinity]# cat pod-preferred-affinitu-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-thrid
  labels:
    app: myapp1
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - {key: app, operator: In, values: ["cache"]}
          topologyKey: zone
      - weight: 20
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - {key: app, operator: In, values: ["db"]}
          topologyKey: zone
Two affinity preferences are defined: one prefers the zone (judged by the zone node label) of nodes running Pods labeled app=cache, with a higher weight of 80; the other prefers the zone of nodes running Pods labeled app=db, with a lower weight of 20.
[root@k8s-master1 podAffinity]# kubectl apply -f pod-preferred-affinitu-demo.yaml
pod/pod-thrid created
[root@k8s-master1 podAffinity]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          21m   10.244.36.95     k8s-node1   <none>           <none>
pod-second   1/1     Running   0          21m   10.244.36.96     k8s-node1   <none>           <none>
pod-thrid    1/1     Running   0          7s    10.244.169.135   k8s-node2   <none>           <none>
Given the existing Pod labels and node labels, neither preference can be satisfied, yet the Pod is still created and scheduled successfully—this is Pod soft affinity.
4. Pod anti-affinity
Anti-affinity scheduling is typically used to spread the Pods of the same application apart, or to schedule Pods with different security levels into different zones, racks, or nodes.
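A common pattern—sketched here with hypothetical names, not taken from the examples below—is a Deployment whose replicas repel one another so that no two land on the same node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-spread                # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-spread
  template:
    metadata:
      labels:
        app: myapp-spread
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: myapp-spread                 # repel Pods of this same Deployment
            topologyKey: kubernetes.io/hostname   # at most one such Pod per node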
Example: define two Pods; the first serves as the reference, and the second is scheduled onto a node different from the first.
[root@k8s-master1 podAffinity]# cat pod-required-anti-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app1: myapp1
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app1, operator: In, values: ["myapp1"]}
        topologyKey: kubernetes.io/hostname
[root@k8s-master1 podAffinity]# kubectl apply -f pod-required-anti-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@k8s-master1 podAffinity]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          8s    10.244.36.99     k8s-node1   <none>           <none>
pod-second   1/1     Running   0          8s    10.244.169.136   k8s-node2   <none>           <none>
The two Pods end up on different nodes—this is Pod anti-affinity.
Now change the location topology by giving the worker nodes a zone label.
[root@k8s-master1 podAffinity]# kubectl label nodes k8s-node1 zone=foo
node/k8s-node1 labeled
[root@k8s-master1 podAffinity]# kubectl label nodes k8s-node2 zone=foo --overwrite
node/k8s-node2 labeled
[root@k8s-master1 podAffinity]# kubectl get nodes --show-labels
NAME          STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master1   Ready    control-plane,master   34d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
k8s-node1     Ready    worker                 34d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=worker,zone=foo
k8s-node2     Ready    worker                 34d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ceph,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,node-role.kubernetes.io/worker=worker,zone=foo
Because both k8s-node1 and k8s-node2 now carry the zone label with the same value, they count as the same location.
[root@k8s-master1 podAffinity]# cat pod-required-anti-affinity-demo1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app2: myapp2
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app2, operator: In, values: ["myapp2"]}
        topologyKey: zone
[root@k8s-master1 podAffinity]# kubectl apply -f pod-required-anti-affinity-demo1.yaml
pod/pod-first created
pod/pod-second created
[root@k8s-master1 podAffinity]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
pod-first    1/1     Running   0          8s
pod-second   0/1     Pending   0          8s
The second Pod is now Pending: both nodes belong to the same location (the same zone), so there is no node in a different location, yet the rule requires anti-affinity—hence the Pending state. If required is changed to preferred for the anti-affinity rule, the Pod will run anyway.
[root@k8s-master1 podAffinity]# cat pod-required-anti-affinity-demo1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app2: myapp2
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 60
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - {key: app2, operator: In, values: ["myapp2"]}
          topologyKey: zone
[root@k8s-master1 podAffinity]# kubectl apply -f pod-required-anti-affinity-demo1.yaml
pod/pod-first created
pod/pod-second created
[root@k8s-master1 podAffinity]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod-first    1/1     Running   0          8s    10.244.36.103    k8s-node1   <none>           <none>
pod-second   1/1     Running   0          8s    10.244.169.138   k8s-node2   <none>           <none>
So Pod anti-affinity scheduling also supports a soft constraint: the scheduler tries its best not to put mutually exclusive Pods into the same location, but when the constraint cannot be satisfied it may be violated and the Pod scheduled anyway.
IV. Taints and tolerations
A taint is key-value data defined on a node that makes the node refuse to run Pods, unless a Pod has a toleration that accepts the node's taint. A toleration is key-value data defined on a Pod that declares which node taints it can tolerate; the scheduler will only place a Pod onto nodes whose taints the Pod tolerates.
The node selector and node affinity mechanisms described earlier both work by adding a label selector to the Pod to match particular node labels—that is, the Pod chooses the node. Taints and tolerations work the other way round: taint information is added to nodes to control which Pods may be scheduled onto them, giving nodes the say over which Pods they accept. In other words, node affinity attracts Pods to a class of nodes, whereas taints let nodes repel particular Pods.
Kubernetes implements this advanced scheduling mechanism with the PodToleratesNodeTaints predicate and the TaintTolerationPriority priority function.
1. Defining taints and tolerations
Taints are defined in a node's spec (nodeSpec), and tolerations in a Pod's spec (podSpec). Both are key-value data and both additionally carry an effect marker; the syntax is "key=value:effect". The key and value follow rules similar to those of resource annotations, while effect defines the level of exclusion applied to Pods and has three possible values:
NoSchedule: new Pods that cannot tolerate this taint must not be scheduled onto the node; this is a hard constraint, and Pods already running on the node are unaffected.
PreferNoSchedule: the soft version of NoSchedule—new Pods that cannot tolerate this taint should preferably not be scheduled onto the node, but may still be accepted when no other node is available; Pods already running on the node are unaffected.
NoExecute: new Pods that cannot tolerate this taint must not be scheduled onto the node (a hard constraint), and if a change to the node's taints or to a Pod's tolerations means an existing Pod no longer matches, that Pod is evicted from the node.
When defining a toleration on a Pod, two operators are supported: Equal, meaning the toleration must match the taint exactly on key, value, and effect; and Exists, meaning only the key and effect must match, and the toleration's value field must be left empty.
A Pod may define more than one toleration, and a node may carry more than one taint; they are checked one against the other, and only when every taint on the node is matched by some toleration can the Pod be scheduled there—any untolerated taint keeps the Pod off the node (or, for NoExecute, evicts it).
On a cluster deployed with kubeadm, the master node is automatically tainted to keep off any Pod that cannot tolerate the taint. As a result, manually created Pods that do not explicitly add a toleration for this taint will never be scheduled onto the master.
[root@k8s-master1 ~]# kubectl describe nodes k8s-master1
Name:               k8s-master1
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.0.0.131/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.159.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 01 Aug 2022 20:44:57 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-master1
  AcquireTime:     <unset>
  RenewTime:       Sun, 04 Sep 2022 22:49:45 +0800
The k8s-master1 node carries a NoSchedule taint, so none of the Pods we create lands on the master, because they do not define a matching toleration.
[root@k8s-master1 ~]# kubectl describe pods kube-apiserver-k8s-master1 -n kube-system
Name:                 kube-apiserver-k8s-master1
Namespace:            kube-system
...
QoS Class:            Burstable
Node-Selectors:       <none>
Tolerations:          :NoExecute op=Exists
Events:
  Type     Reason     Age                 From     Message
  ----     ------     ----                ----     -------
  Warning  Unhealthy  28m (x6 over 6h6m)  kubelet  Liveness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy  28m (x12 over 11h)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 500
This Pod carries the toleration ":NoExecute op=Exists", which tolerates any NoExecute taint regardless of key; note that as a kubeadm static Pod it is bound to k8s-master1 by the kubelet itself rather than by the scheduler, which is why the master's NoSchedule taint does not keep it off the node.
[root@k8s-master1 ~]# kubectl get pods kube-apiserver-k8s-master1 -n kube-system -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
kube-apiserver-k8s-master1   1/1     Running   6          34d   10.0.0.131   k8s-master1   <none>           <none>
2. Managing node taints
Any string that satisfies the key/value syntax rules can be used to define a taint. The command syntax is:
[root@k8s-master1 ~]# kubectl taint nodes <node-name> <key>=<value>:<effect>
For example, treat k8s-node2 as dedicated to production while the other node is used for testing:
[root@k8s-master1 ~]# kubectl taint node k8s-node2 node-type=production:NoSchedule
node/k8s-node2 tainted
[root@k8s-master1 ~]# kubectl describe nodes k8s-node2
Name:               k8s-node2
Roles:              worker
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    disk=ceph
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node2
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=worker
                    zone=foo
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.0.0.133/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.169.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 01 Aug 2022 20:47:27 +0800
Taints:             node-type=production:NoSchedule
Unschedulable:      false
...
With k8s-node2 tainted, Pods that cannot tolerate the taint will not be scheduled onto it.
[root@k8s-master1 ~]# mkdir taints
[root@k8s-master1 ~]# cd taints/
[root@k8s-master1 taints]# vim pod-taint.yaml
[root@k8s-master1 taints]# cat pod-taint.yaml
apiVersion: v1
kind: Pod
metadata:
  name: taint-pod
  namespace: default
  labels:
    tomcat: tomcat-pod
spec:
  containers:
  - name: taint-pod
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent
[root@k8s-master1 taints]# kubectl apply -f pod-taint.yaml
pod/taint-pod created
[root@k8s-master1 taints]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
taint-pod   1/1     Running   0          8s    10.244.36.104   k8s-node1   <none>           <none>
The Pod is scheduled onto k8s-node1: k8s-node2 is tainted and the Pod was created without any toleration, so nothing gets scheduled onto k8s-node2.
Now also taint k8s-node1:
[root@k8s-master1 taints]# kubectl taint nodes k8s-node1 node-type=dev:NoExecute
node/k8s-node1 tainted
[root@k8s-master1 taints]# kubectl get pods -o wide
NAME        READY   STATUS        RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
taint-pod   0/1     Terminating   0          3m42s   10.244.36.104   k8s-node1   <none>           <none>
[root@k8s-master1 taints]# kubectl get pods -o wide
No resources found in default namespace.
The Pod already running on the node is evicted, because the NoExecute taint drives out Pods that cannot tolerate it.
[root@k8s-master1 taints]# vim pod-taint-demo-1.yaml
[root@k8s-master1 taints]# cat pod-taint-demo-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: default
  labels:
    app: myapp
    release: canary
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
  tolerations:
  - key: "node-type"
    operator: "Equal"
    value: "production"
    effect: "NoExecute"
    tolerationSeconds: 3600
[root@k8s-master1 taints]# kubectl apply -f pod-taint-demo-1.yaml
pod/myapp-pod created
[root@k8s-master1 taints]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
myapp-pod   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
The Pod is Pending because the operator is Equal (exact match), so the key, value, and effect must all match the taint defined on the node exactly—here the toleration's NoExecute effect does not match k8s-node2's NoSchedule taint. (tolerationSeconds only applies to NoExecute tolerations; it limits how long the Pod may remain bound to the node after such a taint appears.) Change effect: "NoExecute" to effect: "NoSchedule" and remove the tolerationSeconds: 3600 line.
[root@k8s-master1 taints]# kubectl delete pod myapp-pod
[root@k8s-master1 taints]# cat pod-taint-demo-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: default
  labels:
    app: myapp
    release: canary
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
  tolerations:
  - key: "node-type"
    operator: "Equal"
    value: "production"
    effect: "NoSchedule"
[root@k8s-master1 taints]# kubectl apply -f pod-taint-demo-1.yaml
pod/myapp-pod created
[root@k8s-master1 taints]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
myapp-pod   1/1     Running   0          7s    10.244.169.139   k8s-node2   <none>           <none>
After the change the Pod can be scheduled onto k8s-node2, because the toleration defined in the Pod now matches the taint on the node.
Modify the Pod's toleration again to use Exists: as long as the key exists, the value is treated as a wildcard.
[root@k8s-master1 taints]# cat pod-taint-demo-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: default
  labels:
    app: myapp
    release: canary
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
  tolerations:
  - key: "node-type"
    operator: "Exists"
    value: ""
    effect: "NoSchedule"
[root@k8s-master1 taints]# kubectl delete pod myapp-pod
pod "myapp-pod" deleted
[root@k8s-master1 taints]# kubectl apply -f pod-taint-demo-1.yaml
pod/myapp-pod created
[root@k8s-master1 taints]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
myapp-pod   1/1     Running   0          4s    10.244.169.140   k8s-node2   <none>           <none>
The Pod is again scheduled onto k8s-node2 (only k8s-node2's NoSchedule taint is tolerated; k8s-node1's NoExecute taint still is not).
Change the toleration once more so that any taint with the key node-type is tolerated, whatever its value and whatever its effect:
[root@k8s-master1 taints]# vim pod-taint-demo-1.yaml
[root@k8s-master1 taints]# cat pod-taint-demo-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: default
  labels:
    app: myapp
    release: canary
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
  tolerations:
  - key: "node-type"
    operator: "Exists"
    value: ""
    effect: ""
[root@k8s-master1 taints]# kubectl delete pod myapp-pod
pod "myapp-pod" deleted
[root@k8s-master1 taints]# kubectl apply -f pod-taint-demo-1.yaml
pod/myapp-pod created
[root@k8s-master1 taints]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
myapp-pod   1/1     Running   0          4s    10.244.36.105   k8s-node1   <none>           <none>
Now the Pod may be scheduled onto either k8s-node1 or k8s-node2.
To delete a taint: kubectl taint nodes <node-name> <key>[:<effect>]-
For example, delete the taint with key node-type and effect NoExecute from k8s-node1:
[root@k8s-master1 taints]# kubectl taint nodes k8s-node1 node-type:NoExecute-
node/k8s-node1 untainted
[root@k8s-master1 taints]# kubectl describe node k8s-node1
Name:               k8s-node1
Roles:              worker
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=worker
                    zone=foo
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.0.0.132/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.36.64
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 01 Aug 2022 20:48:09 +0800
Taints:             <none>
Unschedulable:      false
...
To remove all taints from a node, use kubectl patch to set the node's spec.taints attribute to an empty list.
For example, remove all taints from k8s-node2:
[root@k8s-master1 taints]# kubectl patch nodes k8s-node2 -p '{"spec":{"taints":[]}}'
node/k8s-node2 patched
[root@k8s-master1 taints]# kubectl describe node k8s-node2
Name:               k8s-node2
Roles:              worker
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    disk=ceph
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node2
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=worker
                    zone=foo
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.0.0.133/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.169.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 01 Aug 2022 20:47:27 +0800
Taints:             <none>
Unschedulable:      false
...