DaemonSet
A DaemonSet also has a notion of replica count, but it is not something you define by hand in the manifest; it is driven by the number of nodes in your cluster. For example, take the Calico network plugin that was installed when this cluster was set up:
[root@k8s-master1 deployment]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6bd6b69df9-rl2mh 1/1 Running 21 (57m ago) 11d
calico-node-52wx8 1/1 Running 29 (61m ago) 11d
#the number of calico-node Pods matches the number of nodes: one Pod per node
calico-node-kp8cf 1/1 Running 18 (60m ago) 11d
calico-node-svqs8 1/1 Running 18 (60m ago) 11d
calico-node-tbkkf 1/1 Running 24 (57m ago) 11d
calico-node-wx686 1/1 Running 18 (57m ago) 11d
calico-node-x2ldl 1/1 Running 18 (60m ago) 11d
calico-typha-77fc8866f5-2f7k5 1/1 Running 18 (60m ago) 11d
coredns-567c556887-74dh2 1/1 Running 18 (57m ago) 11d
coredns-567c556887-xdlw6 1/1 Running 18 (57m ago) 11d
etcd-k8s-master1 1/1 Running 24 (61m ago) 11d
etcd-k8s-master2.guoguo.com 1/1 Running 18 (57m ago) 11d
etcd-k8s-master3.guoguo.com 1/1 Running 18 (57m ago) 11d
kube-apiserver-k8s-master1 1/1 Running 30 (58m ago) 11d
kube-apiserver-k8s-master2.guoguo.com 1/1 Running 17 (57m ago) 11d
kube-apiserver-k8s-master3.guoguo.com 1/1 Running 18 (57m ago) 11d
kube-controller-manager-k8s-master1 1/1 Running 24 (61m ago) 11d
kube-controller-manager-k8s-master2.guoguo.com 1/1 Running 17 (57m ago) 11d
kube-controller-manager-k8s-master3.guoguo.com 1/1 Running 19 (57m ago) 11d
kube-proxy-888wd 1/1 Running 24 (61m ago) 11d
#kube-proxy Pods are likewise created one per cluster node
kube-proxy-bv22g 1/1 Running 18 (57m ago) 11d
kube-proxy-dfflv 1/1 Running 18 (57m ago) 11d
kube-proxy-dg8b8 1/1 Running 18 (60m ago) 11d
kube-proxy-pnqr8 1/1 Running 18 (60m ago) 11d
kube-proxy-whq86 1/1 Running 18 (60m ago) 11d
kube-scheduler-k8s-master1 1/1 Running 26 (61m ago) 11d
kube-scheduler-k8s-master2.guoguo.com 1/1 Running 19 (57m ago) 11d
kube-scheduler-k8s-master3.guoguo.com 1/1 Running 18 (57m ago) 11d
metrics-server-684999f4d6-p6tnj 1/1 Running 18 (60m ago) 11d
kube-proxy and calico-node here are both deployed as DaemonSets.
With a DaemonSet you do not set a replica count by hand. The DaemonSet controller watches the cluster's Node objects through the API server (node records are stored in etcd) and creates one Pod per node. When a new node joins the cluster, the DaemonSet brings up a Pod on it; when a node goes away, the corresponding Pod is deleted with it. That is what makes it a "daemon set".
Typical uses: monitoring, where every node needs to run a monitoring agent; the network plugin, since every node must run a network Pod or pod networking breaks; kube-proxy, which talks to the API server and therefore must run on every node; and log collection.
Apart from the missing replicas field, a DaemonSet manifest is not much different from the other controllers (rc, rs, Deployment).
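You can confirm the node-driven count from the DaemonSet status itself: DESIRED is derived from the number of eligible nodes, not from any value in the manifest. A command sketch against the cluster shown above:

```shell
# DESIRED/CURRENT track the number of nodes the DaemonSet can schedule onto
kubectl get daemonset -n kube-system kube-proxy calico-node
# compare with the total node count
kubectl get nodes --no-headers | wc -l
```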
Let's write one:
[root@k8s-master1 daemonSet]# cat daemonset-nginx-1.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemoset-1
spec:
  selector:        #which Pods this controller manages; like other controllers it needs a label selector
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always
      containers:
      - name: image-nginx
        image: images.guoguo.com/apps/nginx:1.22.1
        ports:
        - containerPort: 80
          protocol: TCP
Create it:
[root@k8s-master1 daemonSet]# kubectl apply -f daemonset-nginx-1.yml
[root@k8s-master1 daemonSet]# kubectl get pods
NAME READY STATUS RESTARTS AGE
daemoset-1-nwwxj 1/1 Running 0 112s
daemoset-1-qb66h 1/1 Running 0 112s
daemoset-1-sqz8z 1/1 Running 0 112s
#Why are there only three Pods instead of six? The masters were tainted when the cluster was built, so the scheduler will not place these Pods on master nodes.
#We did not give the Pods a toleration for that taint; in production, application Pods are not allowed on the masters!
#The exception is node-level agents such as monitoring clients; for those, add a toleration to the DaemonSet.
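For that case, a toleration in the Pod template lets the scheduler place the DaemonSet's Pods on the tainted masters. A minimal sketch; it assumes the standard node-role.kubernetes.io/control-plane:NoSchedule taint (older clusters taint masters with node-role.kubernetes.io/master instead, so check with kubectl describe node first):

```yaml
# fragment of the DaemonSet's .spec.template.spec
spec:
  tolerations:                                    # tolerate the control-plane taint
  - key: node-role.kubernetes.io/control-plane    # taint key; an assumption, verify on your nodes
    operator: Exists                              # match the key regardless of its value
    effect: NoSchedule
```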
Rollback
Rolling back and updating a DaemonSet works exactly like a Deployment.
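The usual kubectl rollout subcommands apply to DaemonSets as well. A sketch using the DaemonSet from the example above; the 1.23.1 image tag is hypothetical:

```shell
kubectl set image daemonset/daemoset-1 image-nginx=images.guoguo.com/apps/nginx:1.23.1  # trigger a rolling update (hypothetical tag)
kubectl rollout status daemonset/daemoset-1    # watch the rollout
kubectl rollout history daemonset/daemoset-1   # list revisions
kubectl rollout undo daemonset/daemoset-1      # roll back to the previous revision
```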
From: https://blog.51cto.com/u_15971294/7120957