1 Kubernetes Container, Pod, and Namespace memory and CPU limits
1.1 Resource limit units
1 If a running container does not define resource limits (memory, CPU) but its namespace defines a LimitRange, the container inherits the default limits from that LimitRange.
2 If the namespace defines no LimitRange, the container can consume all available resources on the host, until resources are exhausted and the host OOM killer is triggered.
CPU is limited in units of cores; a value can be a whole number of cores, a fractional core count, or millicores (m / milli):
2 = 2 cores = 200%, 0.5 = 500m = 50%, 1.2 = 1200m = 120%
Memory is measured in bytes; valid suffixes are E, P, T, G, M, K and Ei, Pi, Ti, Gi, Mi, Ki
1536Mi = 1.5Gi
request: the minimum amount of resources a node must have available for the kube-scheduler to place the pod on it
limit: the upper bound of resources the pod may use once it is running
The relationship between the two:
0 <= request <= limit (0 means unlimited)
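A minimal sketch of how requests and limits appear inside a container spec (the values are illustrative only):
resources:
  requests:         # what the scheduler reserves on a node before placing the pod
    cpu: "500m"
    memory: "256Mi"
  limits:           # the runtime ceiling; leaving it out means unlimited
    cpu: "1"
    memory: "512Mi"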
1.2 Suggested resource limits per application type (for reference)
nginx (static server): 2C/2G, 1C/1G
java (dynamic service): 2C/2G, 2C/4G
php: 2C/2G
go/python: 1C/2G, 1C/1G
job/cronjob: 0.3C/512Mi
elasticsearch: 4C/12G
mysql: 4C/8G
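As an illustration, the java row above could be written as the following container resources block; which of its two figures maps to requests and which to limits is an assumption made here, not stated in the table:
resources:
  requests:
    cpu: "2"
    memory: "2Gi"   # assumed request tier
  limits:
    cpu: "2"
    memory: "4Gi"   # assumed limit tier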
1.3 CPU and memory limits for a single pod
1.3.1 Memory limit only: at most 256Mi of memory, no CPU limit
cat case1-pod-memory-limit.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: limit-test-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels: #rs or deployment
app: limit-test-pod
# matchExpressions:
# - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
template:
metadata:
labels:
app: limit-test-pod
spec:
containers:
- name: limit-test-container
        image: lorel/docker-stress-ng #stress-test image
resources:
limits:
memory: "256Mi"
requests:
memory: "100Mi"
#command: ["stress"]
args: ["--vm", "2", "--vm-bytes", "256M"]
Check the resources actually in use.
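For example (a sketch; kubectl top needs metrics-server installed in the cluster, and <pod-name> is a placeholder):
kubectl get pods -n magedu                  # RESTARTS/STATUS may show OOMKilled when the 256Mi limit is exceeded
kubectl top pod -n magedu                   # live CPU/memory usage
kubectl describe pod <pod-name> -n magedu   # shows the applied requests/limits and the last termination reason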
1.3.2 Limit both CPU and memory: at most 1.3 CPU cores and 512Mi of memory
cat case2-pod-memory-and-cpu-limit.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: limit-test-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels: #rs or deployment
app: limit-test-pod
# matchExpressions:
# - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
template:
metadata:
labels:
app: limit-test-pod
spec:
containers:
- name: limit-test-container
image: lorel/docker-stress-ng
resources:
limits:
cpu: "1.3"
memory: "512Mi"
requests:
memory: "100Mi"
cpu: "500m"
#command: ["stress"]
args: ["--vm", "2", "--vm-bytes", "256M"]
#nodeSelector:
# env: group1
Check resource usage.
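Again as a sketch (metrics-server required), the CPU cap can be observed directly:
kubectl top pod -n magedu    # CPU usage of the stress pod should plateau at roughly 1300m, matching the 1.3-core limit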
1.4 LimitRange restrictions
A LimitRange applies to all containers, pods, and PVCs created in the specified namespace. Example:
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-magedu
  namespace: magedu
spec:
  limits:
  - type: Container #the resource type being constrained
    max:
      cpu: "2" #maximum CPU for a single container
      memory: "2Gi" #maximum memory for a single container
    min:
      cpu: "500m" #minimum CPU for a single container
      memory: "512Mi" #minimum memory for a single container
    default:
      cpu: "500m" #default CPU limit for a single container
      memory: "512Mi" #default memory limit for a single container
    defaultRequest:
      cpu: "500m" #default CPU request for a single container
      memory: "512Mi" #default memory request for a single container
    maxLimitRequestRatio:
      cpu: 2 #maximum CPU limit/request ratio is 2
      memory: 2 #maximum memory limit/request ratio is 2
  - type: Pod
    max:
      cpu: "4" #maximum CPU for a single Pod
      memory: "4Gi" #maximum memory for a single Pod
  - type: PersistentVolumeClaim
    max:
      storage: 50Gi #maximum requests.storage for a PVC
    min:
      storage: 30Gi #minimum requests.storage for a PVC
View the details of the configured limits:
kubectl get limitranges -n magedu
kubectl describe limitranges limitrange-magedu -n magedu
Now try to create pods in this namespace; their resource settings must comply with the LimitRange, otherwise they cannot be created.
1.4.1 Creating a pod that complies with the rules
[root@k8s-master1 magedu-limit-case]# cat case4-pod-RequestRatio-limit.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: magedu-wordpress-deployment-label
name: magedu-wordpress-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-wordpress-selector
template:
metadata:
labels:
app: magedu-wordpress-selector
spec:
containers:
- name: magedu-wordpress-nginx-container
image: nginx:1.16.1
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 0.5
memory: 512Mi
- name: magedu-wordpress-php-container
image: php:5.6-fpm-alpine
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
#cpu: 2
memory: 1Gi
requests:
cpu: 1
memory: 512Mi
1.4.2 Creating a pod that violates the rules (exceeds the Max Limit/Request Ratio)
[root@k8s-master1 magedu-limit-case]# cat case4-pod-RequestRatio-limit.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: magedu-wordpress-deployment-label
name: magedu-wordpress-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-wordpress-selector
template:
metadata:
labels:
app: magedu-wordpress-selector
spec:
containers:
- name: magedu-wordpress-nginx-container
image: nginx:1.16.1
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
memory: 2Gi
requests:
cpu: 0.5
memory: 512Mi
- name: magedu-wordpress-php-container
image: php:5.6-fpm-alpine
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
#cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 512Mi
When you apply it, kubectl get pod shows no pod at all, because the pod exceeds the LimitRange constraints.
To see what exactly went wrong, use the following commands:
kubectl get deployments.apps -n magedu
kubectl get deployments.apps magedu-wordpress-deployment -n magedu -o json
The error shows that memory violates the rule: the maximum memory limit/request ratio is 2, but this pod's ratio is 4 (2Gi limit vs 512Mi request), so it cannot be created.
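The rejection is also recorded as events on the ReplicaSet, so a quick way to see the reason is (a sketch):
kubectl describe replicaset -n magedu
kubectl get events -n magedu --sort-by=.lastTimestamp | grep -i limit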
1.5 Limiting resources for an entire namespace: ResourceQuota
https://kubernetes.io/zh/docs/concepts/policy/resource-quotas/
A ResourceQuota restricts the aggregate resource consumption of a namespace: the CPU and memory figures below are totals summed over all pods in the namespace, regardless of which nodes they run on.
The quota is configured as follows:
[root@k8s-master1 magedu-limit-case]# cat case6-ResourceQuota-magedu.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: quota-magedu
namespace: magedu
spec:
hard:
requests.cpu: "8"
limits.cpu: "8"
requests.memory: 4Gi
limits.memory: 4Gi
requests.nvidia.com/gpu: 4
pods: "20" #限制pod数量
services: "6" #限制services数量
View the resourcequota:
kubectl get resourcequotas -n magedu
kubectl describe resourcequotas quota-magedu -n magedu
1.5.1 Example 1: within the quota
Create 3 pods; the total memory limit is 3Gi and total CPU is 1.5 cores, which satisfies the quota.
[root@k8s-master1 magedu-limit-case]# cat case8-namespace-cpu-limit-test.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: magedu-nginx-deployment-label
name: magedu-nginx-deployment
namespace: magedu
spec:
replicas: 3
selector:
matchLabels:
app: magedu-nginx-selector
template:
metadata:
labels:
app: magedu-nginx-selector
spec:
containers:
- name: magedu-nginx-container
image: nginx:1.16.1
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 0.5
memory: 1Gi
requests:
cpu: 0.5
memory: 512Mi
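After applying, all three pods should be Running, and the quota usage can be checked against the Hard limits (a sketch):
kubectl get pods -n magedu -l app=magedu-nginx-selector
kubectl describe resourcequota quota-magedu -n magedu   # the Used column counts these pods' requests/limits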
1.5.2 Example 2: with 5 replicas the total memory exceeds the quota, so only 4 pods can be created
[root@k8s-master1 magedu-limit-case]# cat case8-namespace-cpu-limit-test.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: magedu-nginx-deployment-label
name: magedu-nginx-deployment
namespace: magedu
spec:
replicas: 5
selector:
matchLabels:
app: magedu-nginx-selector
template:
metadata:
labels:
app: magedu-nginx-selector
spec:
containers:
- name: magedu-nginx-container
image: nginx:1.16.1
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 0.2
memory: 1Gi
requests:
cpu: 0.2
memory: 512Mi
The failure details can be seen with:
kubectl get deployments magedu-nginx-deployment -n magedu -o json
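Besides the raw JSON, the deployment's status conditions and the namespace events also carry the quota error; for example (a sketch):
kubectl get deployment magedu-nginx-deployment -n magedu -o jsonpath='{.status.conditions}'
kubectl get events -n magedu | grep -i quota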
2 nodeSelector, nodeName, node affinity/anti-affinity, pod affinity/anti-affinity, taints and tolerations, eviction
2.1 Working with node labels
# Add labels
kubectl label node 172.31.7.112 disktype=ssd
kubectl label node 172.31.7.112 project=magedu
# Delete a label
kubectl label node 172.31.7.112 disktype-
# View labels
kubectl get nodes --show-labels=true
kubectl get pod --show-labels
kubectl describe node 172.31.7.110
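Labels can also be used directly to filter nodes, for example:
kubectl get nodes -l disktype=ssd
kubectl get nodes -l project=magedu,disktype=ssd   # multiple label selectors are ANDed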
2.2 nodeSelector: scheduling by node labels
The pod below must be scheduled onto a node labeled project=magedu and disktype=hdd; if no node matches, it is not scheduled and stays Pending (see how to inspect this below the manifest).
[root@k8s-master1 Affinit-case]# cat case1-nodeSelector.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
memory: "512Mi"
requests:
cpu: 500m
memory: "512Mi"
nodeSelector:
project: magedu
disktype: hdd
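If no node carries both labels, the pod stays Pending; a sketch of how to inspect it (<pod-name> is a placeholder):
kubectl get pods -n magedu -o wide            # STATUS Pending, NODE <none>
kubectl describe pod <pod-name> -n magedu     # the Events section shows a FailedScheduling message from the scheduler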
2.3 nodeName: scheduling by node name
This is used relatively rarely. The pod below must be placed on the node 172.31.7.122; nodeName bypasses the scheduler entirely.
[root@k8s-master1 Affinit-case]# cat case2-nodename.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
nodeName: 172.31.7.122
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
memory: "512Mi"
requests:
cpu: 500m
memory: "512Mi"
2.4 Node affinity and anti-affinity
2.4.1 Hard (required) affinity
When nodeSelectorTerms contains multiple entries (each starting with - matchExpressions:), a node only needs to satisfy one of them: the terms are ORed.
2.4.1.1 Demo: multiple matchExpressions entries (multiple nodeSelectorTerms)
[root@k8s-master1 Affinit-case]# cat case3-1.1-nodeAffinity-requiredDuring-matchExpressions.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions: #condition 1
              - key: disktype
                operator: In
                values:
                - hdd #matching just one of the values is enough to schedule
                - xxx
            - matchExpressions: #condition 2; the terms are ORed, so matching any single term (and any single value within it) is enough to schedule
              - key: project
                operator: In
                values:
                - mmm #even if neither value of condition 2 matches, the pod can still be scheduled as long as condition 1 matches
                - nnn
2.4.1.2 One matchExpressions entry with multiple keys: all keys must be satisfied
In the example below a single nodeSelectorTerm contains two key expressions; both must match for the pod to be scheduled, because expressions within one term are ANDed.
[root@k8s-master1 Affinit-case]# cat case3-1.2-nodeAffinity-requiredDuring-matchExpressions.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions: #hard-affinity condition 1
              - key: disktype
                operator: In
                values:
                - ssd
                - xxx #for a single key, matching any one of its values is enough
              - key: project #conditions 1 and 2 sit in the same term, so both must be satisfied, otherwise the pod is not scheduled
                operator: In
                values:
                - magedu
2.4.2 Soft (preferred) affinity
Example:
The condition with weight 80 (project) is preferred over the one with weight 60 (disktype). Even if neither condition matches (the values mageduxx/hddxx are deliberately non-existent here), the pod is still scheduled onto some node; that is the nature of soft affinity.
[root@k8s-master1 Affinit-case]# cat case3-2.1-nodeAffinity-preferredDuring.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 80
preference:
matchExpressions:
- key: project
operator: In
values:
- mageduxx
- weight: 60
preference:
matchExpressions:
- key: disktype
operator: In
values:
- hddxx
2.4.3 Combining hard and soft affinity
Example:
[root@k8s-master1 Affinit-case]# cat case3-2.2-nodeAffinity-requiredDuring-preferredDuring.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: #hard affinity
            nodeSelectorTerms:
            - matchExpressions: #hard condition 1
              - key: "kubernetes.io/role"
                operator: NotIn
                values:
                - "master" #require that the node's kubernetes.io/role label is not master, i.e. never schedule onto a master node (node anti-affinity)
          preferredDuringSchedulingIgnoredDuringExecution: #soft affinity
          - weight: 80
            preference:
              matchExpressions:
              - key: project
                operator: In
                values:
                - magedu
          - weight: 60
            preference:
              matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
2.4.4 Node anti-affinity
Example:
[root@k8s-master1 Affinit-case]# cat case3-3.1-nodeantiaffinity.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions: #condition 1
              - key: disktype
                operator: NotIn #the target node must not carry the label disktype=hdd
                values:
                - hdd #the pod is never scheduled onto a node labeled disktype=hdd; only nodes without that label are eligible
2.5 Pod affinity and anti-affinity (podAffinity / podAntiAffinity)
2.5.1 Soft (preferred) pod affinity
Example: make nginx and tomcat get scheduled onto the same node.
nginx.yaml
[root@k8s-master1 Affinit-case]# cat case4-4.1-nginx.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: python-nginx-deployment-label
name: python-nginx-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: python-nginx-selector
template:
metadata:
labels:
app: python-nginx-selector
project: python
spec:
containers:
- name: python-nginx-container
image: nginx:1.20.2-alpine
#command: ["/apps/tomcat/bin/run_tomcat.sh"]
#imagePullPolicy: IfNotPresent
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
- containerPort: 443
protocol: TCP
name: https
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
# resources:
# limits:
# cpu: 2
# memory: 2Gi
# requests:
# cpu: 500m
# memory: 1Gi
---
kind: Service
apiVersion: v1
metadata:
labels:
app: python-nginx-service-label
name: python-nginx-service
namespace: magedu
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
nodePort: 30014
- name: https
port: 443
protocol: TCP
targetPort: 443
nodePort: 30453
selector:
app: python-nginx-selector
    project: python #one or more selector labels; the target pods must carry all of them
tomcat.yaml
[root@k8s-master1 Affinit-case]# cat case4-4.2-podaffinity-preferredDuring.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
affinity:
podAffinity:
#requiredDuringSchedulingIgnoredDuringExecution:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: project
operator: In
values:
- python
topologyKey: kubernetes.io/hostname
namespaces:
- magedu
Verify where the pods were scheduled:
kubectl get pod -n magedu -o wide
2.5.2 Hard (required) pod affinity
Example:
nginx.yaml is the same as in the previous example.
[root@k8s-master1 Affinit-case]# cat case4-4.3-podaffinity-requiredDuring.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 3
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
affinity:
        podAffinity: #pod affinity
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: project
operator: In
values:
- python
topologyKey: "kubernetes.io/hostname"
namespaces:
- magedu
2.5.3 Pod anti-affinity
2.5.3.1 Hard (required) anti-affinity
[root@k8s-master1 Affinit-case]# cat case4-4.4-podAntiAffinity-requiredDuring.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
affinity:
        podAntiAffinity: #pod anti-affinity
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: project
operator: In
values:
- python
topologyKey: "kubernetes.io/hostname"
namespaces:
- magedu
2.5.3.2 Soft (preferred) anti-affinity
[root@k8s-master1 Affinit-case]# cat case4-4.5-podAntiAffinity-preferredDuring.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app2-deployment-label
name: magedu-tomcat-app2-deployment
namespace: magedu
spec:
replicas: 1
selector:
matchLabels:
app: magedu-tomcat-app2-selector
template:
metadata:
labels:
app: magedu-tomcat-app2-selector
spec:
containers:
- name: magedu-tomcat-app2-container
image: tomcat:7.0.94-alpine
imagePullPolicy: IfNotPresent
#imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
affinity:
        podAntiAffinity: #anti-affinity
          preferredDuringSchedulingIgnoredDuringExecution: #soft anti-affinity
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: project
operator: In
values:
- python
topologyKey: kubernetes.io/hostname
namespaces:
- magedu
2.6 Affinity and anti-affinity summary
Hard affinity: the pods are definitely placed together (or not scheduled at all)
Hard anti-affinity: the pods are definitely not placed together
Soft affinity: placed together if possible
Soft anti-affinity: separated if possible; if separation is not possible, they may still end up together
2.7 Taints and tolerations
https://kubernetes.io/zh/docs/concepts/scheduling-eviction/taint-and-toleration/
A taint keeps pods off a node; a toleration lets a pod be scheduled onto a tainted node anyway.
Taints and tolerations work together: the taint repels most pods, while tolerations admit the small set of pods that explicitly tolerate it.
2.7.1 Adding taints
kubectl taint nodes 172.31.7.111 key1=value1:NoExecute #does not add a label, but taints the node and immediately evicts pods that do not tolerate it
kubectl taint nodes 172.31.7.122 key1=value1:NoSchedule #new pods are no longer scheduled here; master nodes carry this taint effect by default
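To check which taints a node currently carries (a sketch):
kubectl describe node 172.31.7.111 | grep -i taint
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'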
2.7.2 Adding tolerations
[root@k8s-master1 Affinit-case]# cat case5.1-taint-tolerations.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
labels:
app: magedu-tomcat-app1-deployment-label
name: magedu-tomcat-app1-deployment
namespace: magedu
spec:
replicas: 3
selector:
matchLabels:
app: magedu-tomcat-app1-selector
template:
metadata:
labels:
app: magedu-tomcat-app1-selector
spec:
containers:
- name: magedu-tomcat-app1-container
#image: harbor.magedu.local/magedu/tomcat-app1:v7
image: tomcat:7.0.93-alpine
#command: ["/apps/tomcat/bin/run_tomcat.sh"]
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
protocol: TCP
name: http
# env:
# - name: "password"
# value: "123456"
# - name: "age"
# value: "18"
# resources:
# limits:
# cpu: 1
# memory: "512Mi"
# requests:
# cpu: 500m
# memory: "512Mi"
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoSchedule"
---
kind: Service
apiVersion: v1
metadata:
labels:
app: magedu-tomcat-app1-service-label
name: magedu-tomcat-app1-service
namespace: magedu
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
#nodePort: 40003
selector:
app: magedu-tomcat-app1-selector
2.7.3 Removing a taint
kubectl taint nodes 172.31.7.122 key1:NoSchedule-
2.8 Evicting pods
2.8.1 Manual eviction
# Method 1
kubectl taint nodes 172.31.7.111 key1=value1:NoExecute #does not add a label, but immediately evicts pods without a matching toleration
# Method 2
kubectl drain 172.31.7.111 --ignore-daemonsets #evicts pods and marks the node SchedulingDisabled (cordons it)
Then delete the node, shut the machine down, and take it offline.
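A sketch of the remaining steps (and of how to undo a drain when the node is only under maintenance):
kubectl delete node 172.31.7.111   # remove the node object from the cluster before shutting the machine down
kubectl uncordon 172.31.7.111      # alternatively, make the node schedulable again instead of deleting it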
2.8.2 Kubernetes node-pressure eviction
https://kubernetes.io/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/
This behavior is enabled by default and needs no manual configuration.
When a node's available memory drops below 100Mi, the kubelet starts evicting pods.
The thresholds are configured in the kubelet configuration file, /var/lib/kubelet/config.yaml.
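For reference, the thresholds live under evictionHard in the KubeletConfiguration; a sketch of such a fragment in /var/lib/kubelet/config.yaml (the values shown match the common defaults, but treat them as illustrative):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"   # start evicting when available memory falls below 100Mi
  nodefs.available: "10%"     # ... or when node filesystem free space falls below 10%
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"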