
Namespaces, Affinity, Pod Lifecycle, and Health Checks

Posted: 2024-06-16 16:33:02

I. Namespaces

1. Switching namespaces

[root@master pod]# kubectl create ns test
namespace/test created
[root@master pod]# kubectl get ns
NAME              STATUS   AGE
default           Active   10h
kube-node-lease   Active   10h
kube-public       Active   10h
kube-system       Active   10h
test              Active   2s
[root@master pod]# kubectl config set-context --current --namespace=kube-system
Context "kubernetes-admin@kubernetes" modified.
[root@master pod]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS        AGE
calico-kube-controllers-d886b8fff-mbdz7   1/1     Running   0               6h42m
calico-node-48tnk                         1/1     Running   0               6h46m
calico-node-jq7mr                         1/1     Running   0               6h46m
calico-node-pdwcr                         1/1     Running   0               6h46m
coredns-567c556887-99cqw                  1/1     Running   1 (6h44m ago)   10h
coredns-567c556887-9sbfp                  1/1     Running   1 (6h44m ago)   10h
etcd-master                               1/1     Running   1 (6h44m ago)   10h
kube-apiserver-master                     1/1     Running   1 (6h44m ago)   10h
kube-controller-manager-master            1/1     Running   1 (6h44m ago)   10h
kube-proxy-7dl5r                          1/1     Running   1 (6h50m ago)   10h
kube-proxy-pvbrg                          1/1     Running   1 (6h44m ago)   10h
kube-proxy-xsqt9                          1/1     Running   1 (6h50m ago)   10h
kube-scheduler-master                     1/1     Running   1 (6h44m ago)   10h
[root@master pod]# kubectl config set-context --current --namespace=default
Context "kubernetes-admin@kubernetes" modified.
[root@master pod]# kubectl get pod
NAME     READY   STATUS    RESTARTS   AGE
nginx1   1/1     Running   0          8m44s

2. Setting a namespace resource quota

  1. Pods in the namespace cannot collectively exceed the quota.

  2. The quota constrains the combined resources of all pods in the namespace.

[root@master ns]# cat test.yaml 
apiVersion: v1
kind: ResourceQuota  # the resource-quota object type
metadata:
  name: mem-cpu-qutoa
  namespace: test  
spec:
  hard:  # the resource ceilings
     requests.cpu: "2" # total CPU requests in the namespace may not exceed 2 cores
     requests.memory: 2Gi 
     limits.cpu: "4"   # total CPU limits may not exceed 4 cores
     limits.memory: 4Gi

# show the namespace in detail
[root@master ns]# kubectl describe ns test
Name:         test
Labels:       kubernetes.io/metadata.name=test
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:            mem-cpu-qutoa
  Resource         Used  Hard
  --------         ---   ---
  limits.cpu       0     4
  limits.memory    0     4Gi
  requests.cpu     0     2
  requests.memory  0     2Gi

No LimitRange resource.

# once the namespace has a quota, every new pod must declare resource limits, or creation is rejected
[root@master pod]# cat nginx.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: test
  labels:
    app: nginx-pod
spec:
 containers:
 - name: nginx01
   image: docker.io/library/nginx:1.9.1
   imagePullPolicy: IfNotPresent
   resources:  # container resource limits; without them a misbehaving pod could consume memory without bound
     limits:
       memory: "2Gi"  # at most 2 GiB of memory
       cpu: "2m"  # millicores; 1000m = 1 core

II. Labels

  1. Labels are essential: many resource types locate and manage objects through them.

  2. Services, controllers, and the like all select the pods they manage by label.

# add a label to the pod
[root@master /]# kubectl label pods nginx1 test=01
pod/nginx1 labeled
[root@master /]# kubectl get pod --show-labels 
NAME     READY   STATUS    RESTARTS   AGE   LABELS
nginx1   1/1     Running   0          45m   app=nginx-pod,test=01

# list the pods that carry this label
[root@master /]# kubectl get pods -l app=nginx-pod
NAME     READY   STATUS    RESTARTS   AGE
nginx1   1/1     Running   0          48m

# list pods in every namespace together with their labels
[root@master /]# kubectl get pods --all-namespaces --show-labels 

# show the value of the label key app as an extra column
[root@master /]# kubectl get pods -L app
NAME     READY   STATUS    RESTARTS   AGE   APP
nginx1   1/1     Running   0          50m   nginx-pod

# delete the label
[root@master ~]# kubectl label pod nginx1 app-
pod/nginx1 unlabeled
[root@master ~]# kubectl get pod --show-labels 
NAME     READY   STATUS    RESTARTS   AGE   LABELS
nginx1   1/1     Running   0          57m   test=01

III. Affinity

1. Node selectors

Pods are placed by host name or by node label. This is forced placement: nodeName bypasses the scheduler entirely, and if the named node does not exist the pod is still accepted but stays in the Pending state.

1. nodeName

[root@master pod]# cat pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: test
spec:
  nodeName: node1  # place the pod on the host node1
  containers:
    - name: pod1
      image: docker.io/library/nginx
      imagePullPolicy: IfNotPresent 

[root@master pod]# kubectl get pod -n test -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
nginx1   1/1     Running   0          12h   10.244.104.5     node2   <none>           <none>
pod1     1/1     Running   0          34s   10.244.166.130   node1   <none>           <none>

2. nodeSelector

# label the node so the scheduler can match it
[root@master ~]# kubectl label nodes node1 app=node1
node/node1 labeled
[root@master ~]# kubectl get nodes node1 --show-labels 
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node1   Ready    <none>   23h   v1.26.0   app=node1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux

[root@master pod]# cat pod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  namespace: test
spec:
  nodeSelector:  # schedule by node label
    app: node1   # expressed as a key/value pair
  containers:
  - name: pod2
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent   

[root@master pod]# kubectl get pod -n test -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
nginx1   1/1     Running   0          12h     10.244.104.5     node2   <none>           <none>
pod1     1/1     Running   0          9m28s   10.244.166.130   node1   <none>           <none>
pod2     1/1     Running   0          12s     10.244.166.131   node1   <none>           <none>

2. Node affinity

  1. Scheduling is driven by labels on the nodes.

  2. It describes the relationship between a pod and the nodes it may run on.

1. Soft affinity (preferred)

  1. If no node matches the preference, the pod is still scheduled onto some other available node.
[root@master pod]# cat pod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod4
  namespace: test
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:   # match labels on the nodes
          - key: app
            operator: In
            values: ["node1"]
        weight: 1   # preference weight used when scoring nodes
  containers:
  - name: pod4
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod3   1/1     Running   0          6m52s   10.244.166.133   node1   <none>           <none>
pod4   1/1     Running   0          40s     10.244.166.135   node1   <none>           <none>

2. Hard affinity (required)

[root@master pod]# cat pod3.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  namespace: test
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement
        nodeSelectorTerms:   # node labels the candidate must match
        - matchExpressions:
          - key: app
            operator: In
            values: ["node1"]   # schedule onto a node carrying the label app=node1
  containers:
  - name: pod3
    image: docker.io/library/nginx:1.9.1
    imagePullPolicy: IfNotPresent

3. Pod affinity

  1. Pods that depend on each other can be co-located for efficiency - for example, a web service next to its database.

  2. Scheduling is based on the labels of pods that are already running.

1. Soft affinity (preferred)

apiVersion: v1
kind: Pod
metadata:
  name: pod7
  namespace: test
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["pod4"]
          topologyKey: app
        weight: 1
  containers:
  - name: pod7
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod4   1/1     Running   0          24m   10.244.166.136   node1   <none>           <none>
pod5   1/1     Running   0          21m   10.244.166.137   node1   <none>           <none>
pod7   1/1     Running   0          51s   10.244.166.139   node1   <none>           <none>

2. Hard affinity (required)

[root@master pod]# cat pod5.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod5
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: kubernetes.io/hostname   # the topology domain; this label differs on every node (node1, node2, ...)
  containers:
  - name: pod5
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

# the value of topologyKey is normally chosen from labels that exist on the nodes
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: app2  # app2 is a label on node2: schedule into the app2 topology domain that already runs a pod labeled app=pod4
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# cat pod6.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: app  # schedule into the topology domain defined by the node label app, next to a pod labeled app=pod4
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

# the operator: DoesNotExist case
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: DoesNotExist 
        topologyKey: app   # match pods that do NOT have the key app, within the app-labeled topology; it still lands on the app-labeled node
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
 

4. Pod anti-affinity

When two pods both consume a lot of memory, anti-affinity keeps them on separate nodes.

apiVersion: v1
kind: Pod
metadata:
  name: pod8
  namespace: test
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: kubernetes.io/hostname   # avoid any node already running a pod labeled app=pod4; here it lands on node2
  containers:
  - name: pod8
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent

[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod4   1/1     Running   0          36m     10.244.166.136   node1   <none>           <none>
pod5   1/1     Running   0          33m     10.244.166.137   node1   <none>           <none>
pod6   1/1     Running   0          7m42s   10.244.166.140   node1   <none>           <none>
pod7   1/1     Running   0          12m     10.244.166.139   node1   <none>           <none>
pod8   1/1     Running   0          8s      10.244.104.6     node2   <none>           <none>

5. Taints

  1. Taints are applied on nodes

  2. kubectl explain node.spec.taints

  3. Apply one manually: kubectl taint nodes node1 a=b:NoSchedule

  4. Three taint effects

    1. NoExecute - pods already on the node are evicted, and no new pods are scheduled there

    2. NoSchedule - existing pods stay, but new pods cannot be scheduled onto the node

    3. PreferNoSchedule - pods are scheduled there only when no other node will do

# put a taint on node1
[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
pod4   1/1     Running   0          41m     10.244.166.136   node1   <none>           <none>
pod5   1/1     Running   0          37m     10.244.166.137   node1   <none>           <none>
pod6   1/1     Running   0          12m     10.244.166.140   node1   <none>           <none>
pod7   1/1     Running   0          17m     10.244.166.139   node1   <none>           <none>
pod8   1/1     Running   0          4m33s   10.244.104.6     node2   <none>           <none>

[root@master pod]# kubectl taint node node1 app=node1:NoExecute
node/node1 tainted
# every pod that was on that node has been evicted
[root@master pod]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
pod8   1/1     Running   0          6m21s   10.244.104.6   node2   <none>           <none>

# remove the taint
[root@master pod]# kubectl taint node node1 app-
node/node1 untainted
[root@master pod]# kubectl describe node node1 | grep -i taint
Taints:             <none>

6. Tolerations

  1. Tolerations are declared on the pod: a pod that tolerates a node's taint can still be scheduled onto that node.

  2. kubectl explain pod.spec.tolerations

# the node is tainted, but a pod whose toleration matches the taint can still land on it
# taint node1
[root@master pod]# kubectl taint node node1 app=node1:NoExecute
node/node1 tainted

# this pod can still be scheduled onto node1
apiVersion: v1
kind: Pod
metadata:
  name: pod10
  namespace: test
spec:
  tolerations:
  - key: "app"
    operator: Equal  # Equal: key, value and effect must all match the node's taint; Exists: the key alone must be present and the value acts as a wildcard
    value: "node1"
    effect: NoExecute
  containers:
  - name: pod10
    image: docker.io/library/nginx:1.9.1  

[root@master pod]# kubectl get pod -n test -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
pod10   1/1     Running   0          58s   10.244.166.142   node1   <none>           <none>
pod8    1/1     Running   0          27m   10.244.104.6     node2   <none>           <none>


apiVersion: v1
kind: Pod
metadata:
  name: pod11
  namespace: test
spec:
  tolerations:
  - key: "app"
    operator: Exists   # tolerates the app taint with effect NoExecute no matter what its value is
    value: ""
    effect: NoExecute
  containers:
  - name: pod11
    image: docker.io/library/nginx:1.9.1  

IV. Pod lifecycle

(figure: pod lifecycle diagram)

  1. Init containers run first; the main container starts only after every init container has completed.

  2. The main container can carry lifecycle hooks: postStart and preStop.

1. Init containers

[root@master pod]# cat init.yaml 
apiVersion: v1
kind: Pod
metadata:
  name:  init-pod
  namespace: test
spec:
  initContainers:
  - name: init-pod1
    image: docker.io/library/nginx:1.9.1 
    command: ["/bin/bash","-c","touch /11.txt"]
  containers:
  - name: main-pod
    image: docker.io/library/nginx:1.9.1 

[root@master pod]# kubectl get pod -n test -w
NAME       READY   STATUS    RESTARTS   AGE
init-pod   0/1     Pending   0          0s
init-pod   0/1     Pending   0          0s
init-pod   0/1     Init:0/1   0          0s
init-pod   0/1     Init:0/1   0          1s
init-pod   0/1     PodInitializing   0          2s
init-pod   1/1     Running           0          3s

# if an init container errors out, it is restarted over and over, governed by the pod's restart policy
[root@master pod]# cat init.yaml 
apiVersion: v1
kind: Pod
metadata:
  name:  init-pod
  namespace: test
spec:
  initContainers:
  - name: init-pod1
    image: docker.io/library/nginx:1.9.1 
    command: ["/bin/bash","-c","qwe /11.txt"]
  containers:
  - name: main-pod
    image: docker.io/library/nginx:1.9.1 

[root@master pod]# kubectl get pod -n test -w
NAME       READY   STATUS    RESTARTS   AGE
init-pod   0/1     Pending   0          0s
init-pod   0/1     Pending   0          0s
init-pod   0/1     Init:0/1   0          0s
init-pod   0/1     Init:0/1   0          0s
init-pod   0/1     Init:0/1   0          1s
init-pod   0/1     Init:Error   0          2s
init-pod   0/1     Init:Error   1 (2s ago)   3s
init-pod   0/1     Init:CrashLoopBackOff   1 (2s ago)   4s
init-pod   0/1     Init:Error              2 (14s ago)   16s

2. postStart hook

  1. The hook runs as soon as the main container has been created (it is not guaranteed to complete before the entrypoint runs).

  2. If the hook fails, the container is killed and restarted per the restart policy, so the main process never gets to serve.

  3. A handler can be written three ways: exec, httpGet, or tcpSocket.

1. exec

[root@master pod]# cat pre.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      postStart:
         exec:
           command: ["/bin/bash","-c","touch /11.txt"]

[root@master pod]# kubectl exec -n test -ti pre-pod -- /bin/bash
root@pre-pod:/# ls
11.txt	boot  etc   lib    media  opt	root  sbin  sys  usr
bin	dev   home  lib64  mnt	  proc	run   srv   tmp  var
root@pre-pod:/# cat 11.txt 

# if the postStart hook fails, the main container will not keep running

2. httpGet
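This section is empty in the original notes. As a hedged sketch, a lifecycle hook can also use an httpGet handler, which issues an HTTP request against the container once it starts; the pod name and port below are illustrative assumptions, not from the notes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hook-http        # hypothetical name
  namespace: test
spec:
  containers:
  - name: hook-http
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      postStart:
        httpGet:          # HTTP request instead of an exec command
          path: /         # request path
          port: 80        # container port to hit
```

If the GET does not return a success status, the hook counts as failed and the container is handled the same way as with a failed exec hook.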


3. preStop hook

The preStop handler runs just before the container is terminated.

[root@master pod]# cat pre.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      preStop:
         exec:
           command: ["/bin/bash","-c","touch /11.txt"]

4. Restart policy and pod status

  1. restartPolicy controls whether kubelet restarts the pod's containers

  2. Always - restart the container whenever it exits, for any reason; this is the default

  3. OnFailure - kubelet restarts the container only when it terminates with a non-zero exit code

  4. Never - kubelet never restarts the container, whatever its state

  5. Pod statuses

    1. Pending - the pod was accepted but cannot run yet: no node satisfies the scheduling constraints, or images are still being pulled

    2. Running - the pod is bound to a node and at least one container has been created

    3. Succeeded - every container terminated successfully and none will be restarted

    4. Failed - every container has terminated and at least one exited with a non-zero code

    5. Unknown - the pod state cannot be determined, typically an apiserver/kubelet communication problem

    6. Evicted - the node ran short of memory or disk

    7. CrashLoopBackOff - a container started, exited abnormally, and keeps being restarted

    8. Error - an error occurred while starting the pod

    9. Completed - the pod has finished its work

# give the container a postStart hook that will fail, and set restartPolicy to Never
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  restartPolicy: Never
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash","-c","qwe /11.txt"]
# the hook fails, and with restartPolicy: Never the pod is not restarted
[root@master pod]# kubectl get pod -n test -w
NAME      READY   STATUS    RESTARTS   AGE
pre-pod   0/1     Pending   0          0s
pre-pod   0/1     Pending   0          0s
pre-pod   0/1     ContainerCreating   0          0s
pre-pod   0/1     ContainerCreating   0          0s
pre-pod   0/1     Completed           0          2s
pre-pod   0/1     Completed           0          3s
pre-pod   0/1     Completed           0          4s

# describe the pod for the details
# the pod exited and was not restarted
Events:
  Type     Reason               Age   From               Message
  ----     ------               ----  ----               -------
  Normal   Scheduled            12m   default-scheduler  Successfully assigned test/pre-pod to node1
  Normal   Pulled               12m   kubelet            Container image "docker.io/library/nginx:1.9.1" already present on machine
  Normal   Created              12m   kubelet            Created container pre-pod
  Normal   Started              12m   kubelet            Started container pre-pod
  Warning  FailedPostStartHook  12m   kubelet            PostStartHook failed
  Normal   Killing              12m   kubelet            FailedPostStartHook

V. Pod health checks (probes run against the containers)

1. livenessProbe (liveness probe)

  1. Detects whether the container in the pod is still running correctly; when the probe fails, k8s uses the restart policy to decide whether to restart the container.

  2. Suited to restarting a container that has faulted, e.g. a web application.

  3. Its only job is to determine whether the container keeps running.
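The notes give no manifest here; a minimal livenessProbe sketch in the style of the pods above (the pod name and probe timings are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod     # hypothetical name
  namespace: test
spec:
  containers:
  - name: liveness-pod
    image: docker.io/library/nginx:1.9.1
    livenessProbe:
      httpGet:                 # probe nginx over HTTP
        path: /
        port: 80
      initialDelaySeconds: 5   # give the server time to start
      periodSeconds: 10        # probe every 10 seconds
      failureThreshold: 3      # restart after 3 consecutive failures
```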





2. readinessProbe (readiness probe)

  1. A container may be running while the program inside still has to load its configuration before it can serve.

  2. The readiness probe covers that gap: only once it succeeds is the pod added to the Service endpoints and allowed to receive traffic.

  3. It prevents the case where the pod is up but the service inside is not really serving.
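As a hedged sketch of the above (name and timings are assumptions), a readinessProbe looks just like a livenessProbe but gates traffic instead of restarting:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-pod    # hypothetical name
  namespace: test
spec:
  containers:
  - name: readiness-pod
    image: docker.io/library/nginx:1.9.1
    readinessProbe:
      httpGet:                 # traffic reaches the pod only after this succeeds
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
```

Unlike a failing livenessProbe, a failing readinessProbe does not restart the container; the pod is merely removed from the Service endpoints.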

3. startupProbe (startup probe)
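The heading above has no body in the original notes. For a slow-starting container, a startupProbe holds off the liveness and readiness probes until it succeeds, so the container is not killed while still booting; a sketch with assumed names and timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: startup-pod      # hypothetical name
  namespace: test
spec:
  containers:
  - name: startup-pod
    image: docker.io/library/nginx:1.9.1
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30     # allow up to 30 * 10s = 300s for startup
      periodSeconds: 10
    livenessProbe:             # only begins once the startupProbe has succeeded
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
```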

From: https://www.cnblogs.com/qw77/p/18249770
