From: https://www.cnblogs.com/xdxx/p/17188661.html
Node
A Node is also referred to as a Worker node or a minion node.
Two main pieces of software run on every Node:
Kubelet:
Watches the state of the Pods on its node, reports the status of the node and of the Pods running on it, communicates with the Master, and manages the Pods on the node.
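For example, the node status that each kubelet reports back to the Master can be seen with kubectl get nodes (the node names match the cluster used later in this post; the VERSION column is illustrative):
[root@master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   4h20m   v1.18.8
master2       Ready    master   3h10m   v1.18.8
master3       Ready    master   3h9m    v1.18.8
node1         Ready    <none>   124m    v1.18.8
node2         Ready    <none>   124m    v1.18.8
node3         Ready    <none>   124m    v1.18.8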
Kube-proxy:
Handles traffic between Pods and load balancing for Services, distributing the traffic to the correct backend machines.
Check the kube-proxy working mode with: curl 127.0.0.1:10249/proxyMode
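On a cluster running in IPVS mode, for instance, this endpoint simply returns the mode name (output shown here is illustrative):
[root@master1 ~]# curl 127.0.0.1:10249/proxyMode
ipvs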
IPVS:
Watches the Master for Service and Endpoint additions and deletions, and calls the Netlink interface to create the corresponding IPVS rules.
Traffic is then forwarded to the appropriate Pods through those IPVS rules.
View the IPVS forwarding table with: ipvsadm -ln
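A typical excerpt looks like the sketch below (the virtual-server addresses assumed here are the default kubernetes and kube-dns Service ClusterIPs; each is backed by the node or Pod IPs as real servers):
[root@master1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.1.243:6443           Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 172.16.159.129:53            Masq    1      0          0
  -> 172.16.159.130:53            Masq    1      0          0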
Iptables:
Watches the Master for Service and Endpoint additions and deletions; for every Service it creates iptables rules
that proxy the Service's ClusterIP to the corresponding backend Pods.
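In iptables mode those rules can be inspected with iptables-save; the entries below are the kind kube-proxy generates to hook Service traffic into its KUBE-SERVICES chain (shown as a sketch, the exact chains differ per cluster):
[root@master1 ~]# iptables-save | grep 'kubernetes service portals'
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES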
Calico:
A CNI-compliant network plugin that assigns every Pod a unique IP address and treats each node as a router,
so Pods on different nodes can communicate with one another. The calico-node Pods run on every node and can be seen in the kube-system namespace:
[root@master1 ~]# kubectl get po -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-7b5bcff94c-v8lz8 1/1 Running 0 3h37m 172.16.159.131 k8s-master1 <none> <none>
calico-node-55txl 1/1 Running 0 124m 192.168.1.230 node3 <none> <none>
calico-node-65j2t 1/1 Running 0 124m 192.168.1.100 node1 <none> <none>
calico-node-87w4z 1/1 Running 0 3h37m 192.168.1.243 k8s-master1 <none> <none>
calico-node-rlfj8 1/1 Running 0 124m 192.168.1.175 node2 <none> <none>
calico-node-sd77l 1/1 Running 0 3h9m 192.168.1.128 master3 <none> <none>
calico-node-zlv45 1/1 Running 0 3h10m 192.168.1.189 master2 <none> <none>
coredns-546565776c-68cfn 1/1 Running 0 4h19m 172.16.159.129 k8s-master1 <none> <none>
coredns-546565776c-hwqbw 1/1 Running 0 4h19m 172.16.159.130 k8s-master1 <none> <none>
etcd-k8s-master1 1/1 Running 0 4h20m 192.168.1.243 k8s-master1 <none> <none>
etcd-master2 1/1 Running 0 3h10m 192.168.1.189 master2 <none> <none>
etcd-master3 1/1 Running 0 3h10m 192.168.1.128 master3 <none> <none>
kube-apiserver-k8s-master1 1/1 Running 0 4h20m 192.168.1.243 k8s-master1 <none> <none>
kube-apiserver-master2 1/1 Running 0 3h10m 192.168.1.189 master2 <none> <none>
kube-apiserver-master3 1/1 Running 0 3h8m 192.168.1.128 master3 <none> <none>
kube-controller-manager-k8s-master1 1/1 Running 1 4h20m 192.168.1.243 k8s-master1 <none> <none>
kube-controller-manager-master2 1/1 Running 0 3h10m 192.168.1.189 master2 <none> <none>
kube-controller-manager-master3 1/1 Running 0 3h9m 192.168.1.128 master3 <none> <none>
kube-proxy-22672 1/1 Running 0 124m 192.168.1.230 node3 <none> <none>
kube-proxy-6sw7r 1/1 Running 0 4h19m 192.168.1.243 k8s-master1 <none> <none>
kube-proxy-crdrf 1/1 Running 0 3h9m 192.168.1.128 master3 <none> <none>
kube-proxy-kvx97 1/1 Running 0 124m 192.168.1.175 node2 <none> <none>
kube-proxy-l2ntr 1/1 Running 0 3h10m 192.168.1.189 master2 <none> <none>
kube-proxy-wxqsk 1/1 Running 0 124m 192.168.1.100 node1 <none> <none>
kube-scheduler-k8s-master1 1/1 Running 1 4h20m 192.168.1.243 k8s-master1 <none> <none>
kube-scheduler-master2 1/1 Running 0 3h10m 192.168.1.189 master2 <none> <none>
kube-scheduler-master3 1/1 Running 0 3h8m 192.168.1.128 master3 <none> <none>
Calico also programs the kernel routing table: each remote node's Pod CIDR is reachable via the IP-in-IP tunnel device (tunl0) routed through that node, while local Pods get host routes on their cali* interfaces:
[root@master1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 enp0s3
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 enp0s3
172.16.104.0 192.168.1.175 255.255.255.192 UG 0 0 0 tunl0
172.16.135.0 192.168.1.230 255.255.255.192 UG 0 0 0 tunl0
172.16.136.0 192.168.1.128 255.255.255.192 UG 0 0 0 tunl0
172.16.159.128 0.0.0.0 255.255.255.192 U 0 0 0 *
172.16.159.129 0.0.0.0 255.255.255.255 UH 0 0 0 cali2f7160f9e3b
172.16.159.130 0.0.0.0 255.255.255.255 UH 0 0 0 calid636110dee6
172.16.159.131 0.0.0.0 255.255.255.255 UH 0 0 0 cali7447d7d4a6b
172.16.166.128 192.168.1.100 255.255.255.192 UG 0 0 0 tunl0
172.16.180.0 192.168.1.189 255.255.255.192 UG 0 0 0 tunl0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3
CoreDNS:
Provides Service name resolution inside the Kubernetes cluster, letting Pods resolve a Service name to an IP address
and then connect to the corresponding application through that Service IP.
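As a quick sketch (the busybox image and the dns-test Pod name are just examples), a Pod can resolve a Service name through CoreDNS like this; 10.96.0.10 is the default kube-dns Service IP:
[root@master1 ~]# kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
pod "dns-test" deleted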