
A Little Basic K8S Every Day -- K8S Scheduling Strategies: Taints and Tolerations


Taints and Tolerations

Earlier experiments covered the nodeName, nodeSelector, node affinity, and pod affinity scheduling strategies.

Sometimes you want to keep certain pods from running on particular nodes; this can be done by tainting those nodes. Pods that tolerate the taint can still be scheduled onto the tainted node.
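A taint is written as key[=value]:effect and applied with kubectl taint; the value part is optional. As a minimal sketch (my-key and my-value are placeholder names, not used in the experiments below):

kubectl taint nodes <node-name> my-key=my-value:NoSchedule

The experiments below use a taint that has only a key (test-taints) and an effect.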
The environment has two master nodes and two worker nodes:
[root@master-worker-node-1 ~]# kubectl get nodes 
NAME                   STATUS   ROLES           AGE     VERSION
master-worker-node-1   Ready    control-plane   4d20h   v1.25.3
master-worker-node-2   Ready    control-plane   4d19h   v1.25.3
only-worker-node-3     Ready    worker          4d19h   v1.25.3
only-worker-node-4     Ready    worker          4d19h   v1.25.3
Normally, newly created pods run only on only-worker-node-3 and only-worker-node-4:
[root@master-worker-node-1 ~]# kubectl get pods -o wide 
NAME                                   READY   STATUS    RESTARTS        AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
pod-affinity-base-pod                  1/1     Running   29 (20m ago)    10h   10.244.31.11   only-worker-node-3   <none>           <none>
test-node-affinity-2                   1/1     Running   4 (140m ago)    16h   10.244.54.8    only-worker-node-4   <none>           <none>
test-node-affinity-3                   1/1     Running   4 (122m ago)    15h   10.244.54.9    only-worker-node-4   <none>           <none>
test-pod-affinity-by-labelselector     1/1     Running   27 (13m ago)    9h    10.244.54.10   only-worker-node-4   <none>           <none>
test-pod-affinity-by-labelselector-2   1/1     Running   27 (89s ago)    9h    10.244.31.12   only-worker-node-3   <none>           <none>
test-prefered-1                        1/1     Running   9 (5m36s ago)   9h    10.244.54.11   only-worker-node-4   <none>           <none>
test-prefered-2                        1/1     Running   8 (55m ago)     8h    10.244.31.13   only-worker-node-3   <none>           <none>
Yet in a cluster built with kubeadm, etcd, kube-apiserver, kube-controller-manager, and kube-scheduler do run on those two master nodes:
[root@master-worker-node-1 ~]# kubectl get pods -n kube-system -o wide |  grep master-worker
calico-node-49qt2                              1/1     Running   0                4d16h   192.168.122.106   master-worker-node-2   <none>           <none>
calico-node-q2wpg                              1/1     Running   0                4d16h   192.168.122.89    master-worker-node-1   <none>           <none>
etcd-master-worker-node-1                      1/1     Running   5                4d20h   192.168.122.89    master-worker-node-1   <none>           <none>
etcd-master-worker-node-2                      1/1     Running   0                4d19h   192.168.122.106   master-worker-node-2   <none>           <none>
kube-apiserver-master-worker-node-1            1/1     Running   32               4d20h   192.168.122.89    master-worker-node-1   <none>           <none>
kube-apiserver-master-worker-node-2            1/1     Running   1 (4d19h ago)    4d19h   192.168.122.106   master-worker-node-2   <none>           <none>
kube-controller-manager-master-worker-node-1   1/1     Running   10 (4d16h ago)   4d20h   192.168.122.89    master-worker-node-1   <none>           <none>
kube-controller-manager-master-worker-node-2   1/1     Running   0                4d19h   192.168.122.106   master-worker-node-2   <none>           <none>
kube-proxy-7gjz9                               1/1     Running   0                4d20h   192.168.122.89    master-worker-node-1   <none>           <none>
kube-proxy-c4d2m                               1/1     Running   0                4d19h   192.168.122.106   master-worker-node-2   <none>           <none>
kube-scheduler-master-worker-node-1            1/1     Running   8 (4d16h ago)    4d20h   192.168.122.89    master-worker-node-1   <none>           <none>
kube-scheduler-master-worker-node-2            1/1     Running   0                4d19h   192.168.122.106   master-worker-node-2   <none>           <none>
Ordinary newly created pods cannot be scheduled onto the master nodes because those nodes carry a taint, and the new pods do not tolerate it:
[root@master-worker-node-1 ~]# kubectl describe nodes master-worker-node-1 |  grep ^Taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
[root@master-worker-node-1 ~]# kubectl describe nodes master-worker-node-2 |  grep ^Taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
The pods created during kubeadm setup can nevertheless run on the master nodes because they carry tolerations for the nodes' taints (and static pods such as etcd and kube-apiserver are started by the kubelet directly, without going through the scheduler):
[root@master-worker-node-1 ~]# kubectl describe pods -n kube-system kube-scheduler-master-worker-node-1 |  grep ^Tolerations
Tolerations:       :NoExecute op=Exists
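An empty key combined with operator Exists matches every taint key, so the toleration shown above tolerates any taint whose effect is NoExecute. Written out in a pod spec it would look roughly like this (a sketch, not copied from the kubeadm manifests):

tolerations:
- operator: Exists
  effect: NoExecute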
The effect field of taints and tolerations
NoSchedule
Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler.
In other words: new pods are not scheduled onto the node unless they tolerate the taint, but pods submitted directly to the kubelet without going through the scheduler (e.g. static pods) can still start, and pods already running on the node keep running.


PreferNoSchedule
Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler.
In other words: the scheduler tries to avoid placing new pods on the node, but it is a preference rather than a hard prohibition; pods already running on the node are unaffected.

NoExecute
Evict any already-running pods that do not tolerate the taint. 
In other words: every pod on the node that does not tolerate this taint, including pods that are already running, is evicted; new pods that do not tolerate it will not be scheduled onto the node either.
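For NoExecute, a toleration may also set tolerationSeconds, which limits how long the pod is allowed to keep running after the taint is added instead of tolerating it indefinitely. A sketch, reusing the test-taints key from the experiments below:

tolerations:
- key: test-taints
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300    # evicted about 300s after the NoExecute taint appears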
Taints
[root@master-worker-node-1 pod]# cat test-taints.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-busybox
  labels:
    func: test
spec:
  containers:
  - name: test-busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 123456"]

The pod is scheduled to only-worker-node-4 as expected:
[root@master-worker-node-1 pod]# kubectl get pods -o wide 
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
test-busybox   1/1     Running   0          11m   10.244.54.12   only-worker-node-4   <none>           <none>
Add a NoSchedule taint to only-worker-node-4:
[root@master-worker-node-1 pod]# kubectl taint node only-worker-node-4 test-taints:NoSchedule
node/only-worker-node-4 tainted
[root@master-worker-node-1 pod]# kubectl describe node only-worker-node-4 |  grep ^Taints
Taints:             test-taints:NoSchedule


The pod keeps running on only-worker-node-4, since NoSchedule does not affect pods that are already running:
[root@master-worker-node-1 ~]# kubectl get pods -w -o wide 
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
test-busybox   1/1     Running   0          25m   10.244.54.12   only-worker-node-4   <none>           <none>

Now taint only-worker-node-4 with NoExecute:
[root@master-worker-node-1 pod]# kubectl taint nodes only-worker-node-4 test-taints:NoExecute
node/only-worker-node-4 tainted
[root@master-worker-node-1 pod]# kubectl describe nodes only-worker-node-4 | grep ^Taints
Taints:             test-taints:NoExecute

The pod gets evicted and terminated:
[root@master-worker-node-1 ~]# kubectl get pods -w -o wide 
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
test-busybox   1/1     Running   0          25m   10.244.54.12   only-worker-node-4   <none>           <none>
test-busybox   1/1     Terminating   0          37m   10.244.54.12   only-worker-node-4   <none>           <none>
test-busybox   1/1     Terminating   0          37m   10.244.54.12   only-worker-node-4   <none>           <none>
test-busybox   0/1     Terminating   0          37m   10.244.54.12   only-worker-node-4   <none>           <none>
test-busybox   0/1     Terminating   0          37m   10.244.54.12   only-worker-node-4   <none>           <none>
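A taint can be removed again by appending - to the same taint specification, for example:

kubectl taint nodes only-worker-node-4 test-taints:NoExecute-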

Tolerations
Taint both only-worker-node-3 and only-worker-node-4 with NoSchedule:
[root@master-worker-node-1 pod]# kubectl taint nodes only-worker-node-3 test-taints:NoSchedule
node/only-worker-node-3 tainted
[root@master-worker-node-1 pod]# kubectl taint nodes only-worker-node-4 test-taints:NoSchedule
node/only-worker-node-4 tainted
[root@master-worker-node-1 pod]# cat test-taints-2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-busybox
  labels:
    func: test
spec:
  containers:
  - name: test-busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 123456"]
  tolerations:
  - effect: NoSchedule
    key: test-taints
    operator: Exists


Because the pod tolerates the test-taints:NoSchedule taint, it is scheduled normally:
[root@master-worker-node-1 pod]# kubectl apply -f test-taints-2.yaml 
pod/test-busybox created
[root@master-worker-node-1 pod]# kubectl get pods -o wide 
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
test-busybox   1/1     Running   0          10s   10.244.31.15   only-worker-node-3   <none>           <none>
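The toleration here uses operator: Exists, which matches on the key alone. When the taint also carries a value (key=value:effect), operator: Equal can be used to require that the value matches too. A sketch, assuming a hypothetical taint disk=ssd:NoSchedule:

tolerations:
- key: disk
  operator: Equal
  value: ssd
  effect: NoSchedule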

From: https://www.cnblogs.com/woshinidaye123/p/16938754.html
