Resource limits: limits and requests
Container resource limits (hard caps):
- resources.limits.cpu
- resources.limits.memory
The minimum resources a container requires, used as the basis for resource allocation when scheduling:
- resources.requests.cpu
- resources.requests.memory
k8s uses the request values to find a node with enough free resources to schedule the pod onto.
CPU units: either the m suffix or a decimal number, e.g. 0.5 = 500m, 1 = 1000m.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
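To try it out (a minimal sketch; pod.yaml is a hypothetical filename for the manifest above):
kubectl apply -f pod.yaml
kubectl describe pod my-pod    # the Requests and Limits sections echo the values above
Note that a container exceeding its memory limit is OOM-killed, while CPU usage above the limit is throttled rather than killed.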
Configuring limits and requests is simple, but choosing the values takes some care:
- limits must not exceed the node's capacity
- requests must not exceed limits
- requests are a minimum requirement and act as a reservation; the pod does not necessarily consume that much of the host's resources
- if requests are set too high, few nodes qualify and resources sit idle; if too low, too many pods land on each node and saturate it
When no node can satisfy the requests, the pod stays in the Pending state; the sketch below shows how to diagnose that.
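A minimal diagnosis sketch for a pod stuck Pending (my-pod is the example pod above; the exact event wording varies by k8s version):
kubectl describe pod my-pod
# Events:
#   Warning  FailedScheduling  ...  0/3 nodes are available: 3 Insufficient cpu.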
Viewing a node's resource usage with kubectl describe node <node_name> shows each pod's requests and limits as well as the node totals:
Namespace    Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------    ----                                    ------------  ----------  ---------------  -------------  ---
kube-system  calico-kube-controllers-d4bfdcb9-vss4q  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14d
kube-system  calico-node-7pwmf                       250m (8%)     0 (0%)      0 (0%)           0 (0%)         14d
kube-system  coredns-6d8c4cb4d-2sqgp                 100m (3%)     0 (0%)      70Mi (2%)        170Mi (5%)     14d
kube-system  coredns-6d8c4cb4d-5q9ks                 100m (3%)     0 (0%)      70Mi (2%)        170Mi (5%)     14d
kube-system  etcd-k8s-master                         100m (3%)    0 (0%)      100Mi (3%)       0 (0%)         14d
kube-system  kube-apiserver-k8s-master               250m (8%)     0 (0%)      0 (0%)           0 (0%)         14d
kube-system  kube-controller-manager-k8s-master      200m (6%)     0 (0%)      0 (0%)           0 (0%)         14d
kube-system  kube-proxy-6m547                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14d
kube-system  kube-scheduler-k8s-master               100m (3%)     0 (0%)      0 (0%)           0 (0%)         14d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1100m (36%)  0 (0%)
  memory             240Mi (7%)   340Mi (10%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
  hugepages-32Mi     0 (0%)       0 (0%)
  hugepages-64Ki     0 (0%)       0 (0%)
nodeSelector: binding pods to nodes
Schedules pods onto nodes whose labels match; if no node matches, scheduling fails.
Purpose:
- constrain a pod to run on specific nodes
- labels must match exactly; if nothing matches, scheduling fails
Use cases:
- dedicated nodes: group nodes by business line (e.g. all pods of business line A must be scheduled onto nodes labeled test: aa)
- special hardware: some nodes have SSD disks
A configuration like the one in step 2 below means the pod can only ever run on nodes carrying the disktype: ssd label; otherwise scheduling fails.
nodeName: once this pod field is set, k8s considers the pod already scheduled; the value is the name of a node. The field is normally filled in by the scheduler, but it can also be set manually for testing, as in the sketch below.
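A minimal nodeName sketch (assumes a node named k8s-node1 exists; the pod name is hypothetical). Because nodeName bypasses the scheduler entirely, nodeSelector and affinity rules are not evaluated for this pod:
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod         # hypothetical name
spec:
  nodeName: k8s-node1      # placed on this node without going through the scheduler
  containers:
  - name: nginx
    image: nginx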
Example: make sure a pod lands on a node with an SSD disk
1. Label the node
kubectl label nodes <node_name> <label-key>=<label-value>
kubectl label nodes k8s-node1 disktype=ssd
kubectl get nodes --show-labels    # view node labels
2. Add the nodeSelector field to the pod
apiVersion: v1
kind: Pod
metadata:
  name: test1
spec:
  nodeSelector:
    disktype: "ssd"
  containers:
  - name: nginx
    image: nginx:1.18
3. Check which node the pod was assigned to
kubectl get pod -A -o wide
# remove a node label: kubectl label node k8s-node1 <label-key>-
kubectl label nodes k8s-node1 disktype-
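If no node carries the disktype=ssd label, test1 stays Pending with a FailedScheduling event (exact wording varies by k8s version):
kubectl describe pod test1
# Events:
#   Warning  FailedScheduling  ...  0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector.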
nodeAffinity: node affinity
Node affinity is similar to nodeSelector: it also uses node labels to schedule pods onto specific nodes.
Compared with nodeSelector:
- richer matching logic, not just exact match:
  In, NotIn, Exists, DoesNotExist, Gt, Lt
- the scheduling policy comes in hard and soft variants:
  hard (required): must be satisfied
  soft (preferred): best effort, not guaranteed
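Hard-policy example: the pod below schedules only onto nodes whose disktype label value is in the given list; if no node matches, it stays Pending: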
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard policy
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # label key
            operator: In       # matching logic
            values:            # label values
            - ssd
  containers:
  - name: nginx
    image: nginx
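Soft-policy example: the scheduler prefers a node matching the expression but falls back to any node when none does (the value ssd88 presumably matches no node, which demonstrates the fallback):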
apiVersion: v1
kind: Pod
metadata:
  name: nginx55
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # soft policy
      - weight: 1              # weight (1-100); higher wins when several rules match
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd88
  containers:
  - name: nginx
    image: nginx
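Because the rule is only preferred, the pod still schedules even if no node is labeled disktype=ssd88; confirm where it landed (nginx55.yaml is a hypothetical filename for the manifest above):
kubectl apply -f nginx55.yaml
kubectl get pod nginx55 -o wide    # the NODE column shows the chosen node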