
Evicting pods when resizing a k8s node

Posted: 2023-04-07 18:02:05

Note: in day-to-day operations you will sooner or later need to resize a k8s node's configuration, or take a problematic node out of the cluster.

Taking node 172.24.80.2, which needs to be scaled up, as an example, the procedure has three steps:

#Cordon node 172.24.80.2: mark it unschedulable so it accepts no new pods
kubectl cordon 172.24.80.2

#Evict the pods running on the node so they are rescheduled onto other nodes
kubectl drain --ignore-daemonsets --delete-emptydir-data 172.24.80.2

Notes: --delete-emptydir-data allows deletion of pods' emptyDir data
--ignore-daemonsets skips DaemonSet-managed pods; even if they were deleted, the DaemonSet controller would immediately recreate them


#Once the node change is done, make 172.24.80.2 schedulable again
kubectl uncordon 172.24.80.2
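
The two preparatory steps above can be wrapped into a small helper. This is only a sketch; the function name `drain_node` and the `--timeout` value are our additions, not from the original procedure.

```shell
# Sketch: cordon + drain in one helper function (drain_node and the
# --timeout value are assumptions, not part of the original post).
drain_node() {
  local node="$1"
  # 1. Stop new pods from being scheduled onto the node.
  kubectl cordon "$node"
  # 2. Evict all pods except DaemonSet-managed ones; emptyDir data is
  #    discarded. --timeout aborts the drain instead of hanging forever.
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data --timeout=300s
}

# Usage:
#   drain_node 172.24.80.2
#   ...resize the node...
#   kubectl uncordon 172.24.80.2
```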

Below is the actual run, including the errors hit along the way.

[root@cvm-01 ~]# kubectl cordon 172.24.80.2
node/172.24.80.2 cordoned
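
To confirm the cordon took effect, list the nodes: a cordoned node shows `SchedulingDisabled` in its STATUS column. The snippet below filters for cordoned nodes; the sample output is illustrative, not captured from the cluster above.

```shell
# On a live cluster you would pipe `kubectl get nodes` instead; here we use
# a sample of that output (illustrative only).
sample_output='NAME          STATUS                     ROLES    AGE   VERSION
172.24.80.2   Ready,SchedulingDisabled   <none>   90d   v1.22.5
172.24.80.3   Ready                      <none>   90d   v1.22.5'

# Print only cordoned nodes (STATUS contains SchedulingDisabled).
echo "$sample_output" | awk 'NR > 1 && $2 ~ /SchedulingDisabled/ { print $1 }'
# → 172.24.80.2
```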

Running drain without --ignore-daemonsets and --delete-emptydir-data fails, and the error message tells you to add them:
[root@cvm-01 ~]# kubectl drain 172.24.80.2
node/172.24.80.2 already cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "172.24.80.2", aborting command...

There are pending nodes to be drained:
 172.24.80.2
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/csi-cbs-node-8r7nw, kube-system/ip-masq-agent-kvbcs, kube-system/kube-proxy-7xzt4, kube-system/node-problem-detector-542vq, kube-system/oom-guard-8wf28, kube-system/serf-agent-rh5sb, kube-system/tke-bridge-agent-tkz8f, kube-system/tke-cni-agent-xcj4g, kube-system/tke-log-agent-8qq72, kube-system/tke-monitor-agent-bfg9n, kube-system/tke-node-exporter-pw2hk, monitoring/node-exporter-pz8wp
cannot delete Pods with local storage (use --delete-emptydir-data to override): monitoring/alertmanager-main-1, monitoring/prometheus-k8s-1

Adding only --delete-emptydir-data still fails, this time because --ignore-daemonsets is missing.

[root@cvm-01 ~]# kubectl drain --delete-emptydir-data 172.24.80.2
node/172.24.80.2 already cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "172.24.80.2", aborting command...

There are pending nodes to be drained:
 172.24.80.2
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/csi-cbs-node-8r7nw, kube-system/ip-masq-agent-kvbcs, kube-system/kube-proxy-7xzt4, kube-system/node-problem-detector-542vq, kube-system/oom-guard-8wf28, kube-system/serf-agent-rh5sb, kube-system/tke-bridge-agent-tkz8f, kube-system/tke-cni-agent-xcj4g, kube-system/tke-log-agent-8qq72, kube-system/tke-monitor-agent-bfg9n, kube-system/tke-node-exporter-pw2hk, monitoring/node-exporter-pz8wp

With both flags, the eviction runs correctly:
[root@cvm-01 ~]# kubectl drain --ignore-daemonsets --delete-emptydir-data 172.24.80.2
node/172.24.80.2 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/csi-cbs-node-8r7nw, kube-system/ip-masq-agent-kvbcs, kube-system/kube-proxy-7xzt4, kube-system/node-problem-detector-542vq, kube-system/oom-guard-8wf28, kube-system/serf-agent-rh5sb, kube-system/tke-bridge-agent-tkz8f, kube-system/tke-cni-agent-xcj4g, kube-system/tke-log-agent-8qq72, kube-system/tke-monitor-agent-bfg9n, kube-system/tke-node-exporter-pw2hk, monitoring/node-exporter-pz8wp
evicting pod monitoring/prometheus-k8s-1
evicting pod easypie-pre/pre-easypie-nacos-1
evicting pod kube-system/coredns-78964c5667-f76ds
evicting pod kube-system/serf-holder-7869fcfdf5-6nzkb
evicting pod monitoring/alertmanager-main-1
pod/coredns-78964c5667-f76ds evicted
pod/serf-holder-7869fcfdf5-6nzkb evicted
pod/prometheus-k8s-1 evicted
pod/alertmanager-main-1 evicted
pod/pre-easypie-nacos-1 evicted
node/172.24.80.2 evicted
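
After a successful drain, only DaemonSet-managed pods should still be bound to the node; on the cluster you can verify this with `kubectl get pods -A -o wide --field-selector spec.nodeName=172.24.80.2`. The snippet below sketches the check against illustrative sample output (pod names borrowed from the DaemonSet list above, not a real capture).

```shell
# On a live cluster:
#   kubectl get pods -A -o wide --field-selector spec.nodeName=172.24.80.2
# Sample of what remains after a clean drain (illustrative only):
remaining='NAMESPACE     NAME                  READY   STATUS    NODE
kube-system   kube-proxy-7xzt4      1/1     Running   172.24.80.2
monitoring    node-exporter-pz8wp   1/1     Running   172.24.80.2'

# Count the pods still on the node (header line excluded); here: 2,
# both DaemonSet-managed, which is expected.
echo "$remaining" | awk 'NR > 1' | wc -l
```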

Screenshot of the run (image not reproduced here).


From: https://blog.51cto.com/215687833/6176463
