
K8S Network Communication


33. K8S Network Communication

33.1 Layer 2 Network Communication

Communication based on the destination MAC address; it cannot cross LAN boundaries, and frame forwarding is typically handled by switches.

33.2 Network Communication - VLAN

  • VLAN
A VLAN (virtual local area network) logically divides one physical switch's network into multiple broadcast domains. Hosts inside a VLAN can communicate directly, while hosts outside the VLAN must be forwarded by a layer-3 device to communicate, so a VLAN confines a server's broadcast packets to that VLAN and reduces broadcast traffic in each network segment. VLAN uses a 12-bit VLAN ID, so one switch supports at most 2^12 = 4096 VLANs. A minimal Linux sketch follows.
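
To make the 12-bit VLAN ID concrete, here is a minimal Linux sketch; the NIC name eth0, VLAN ID 10 and address are assumptions chosen purely for illustration:

#Create an 802.1Q sub-interface tagged with VLAN ID 10
ip link add link eth0 name eth0.10 type vlan id 10
#Assign an address inside the VLAN and bring the interface up
ip addr add 192.168.10.1/24 dev eth0.10
ip link set eth0.10 up
#Verify: the detailed output includes "vlan protocol 802.1Q id 10"
ip -d link show eth0.10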

33.3 Layer 3 Network Communication

Communication based on the destination IP address; it can cross networks, with packets forwarded by routers or layer-3 switches.

33.4 Network Communication - Overlay Technologies

  • VxLAN
#L2 - layer 2 protocol, data link layer
#L3 - layer 3 protocol, network layer
#L4 - layer 4 protocol, transport layer
VxLAN (Virtual eXtensible Local Area Network), promoted mainly by Cisco, is an extension of VLAN and one of the NVO3 standard technologies defined by the IETF. Its defining trait is encapsulating L2 Ethernet frames in UDP packets and transporting them over an L3 network, i.e. re-wrapping packets with the "MAC in UDP" method. VxLAN is essentially an overlay tunneling technique: it wraps L2 Ethernet frames into L4 UDP datagrams and carries them across the L3 network, so the frames behave as if they were in one broadcast domain while actually crossing, yet remaining unconstrained by, the L3 network. VxLAN uses a 24-bit segment ID and therefore supports 2^24 = 16,777,216 segments, far more scalable than VLAN and suited to the needs of large data-center networks. (A Linux sketch of a VXLAN tunnel and VTEP inspection follows this list.)
  • VTEP
VTEP (VXLAN Tunnel Endpoint): the edge device of a VXLAN network and the start and end point of a VXLAN tunnel. Encapsulation and decapsulation of the user's original frames both happen on the VTEP. The VTEP connects to the physical network and is assigned a physical-network IP address: in a VXLAN packet the source IP is the local node's VTEP address and the destination IP is the peer node's VTEP address, so one pair of VTEP addresses corresponds to one VXLAN tunnel. On a server, the virtual switch tunnel device (e.g. flannel.1) is the VTEP; a virtualized network carrying multiple VXLANs needs multiple VTEPs to encapsulate and decapsulate traffic for the different segments.
  • VNI
VNI (VXLAN Network Identifier): the VXLAN network identifier, similar to a VLAN ID, used to distinguish VXLAN segments; virtual machines in different VXLAN segments cannot communicate directly at layer 2. One VNI represents one tenant: even when multiple end users belong to the same VNI, they still constitute a single tenant.
  • NVGRE
NVGRE (Network Virtualization using Generic Routing Encapsulation), backed mainly by Microsoft.
Unlike VXLAN, NVGRE does not use a standard transport protocol (TCP/UDP); it relies on Generic Routing Encapsulation (GRE) instead, using the low 24 bits of the GRE header as the Tenant Network Identifier (TNI), so like VXLAN it supports 16,777,216 segments.
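
As referenced above, a hedged sketch of the VXLAN/VTEP mechanics on plain Linux; the VNI, VTEP addresses and NIC name are assumptions for illustration, not values from this cluster:

#Create a point-to-point VXLAN tunnel: VNI 100, the IANA VXLAN UDP port 4789,
#and local/remote VTEP addresses that live on the physical (L3) network
ip link add vxlan100 type vxlan id 100 dstport 4789 local 10.0.0.202 remote 10.0.0.203 dev ens33
#Assign an inner (overlay) address and bring the tunnel up
ip addr add 172.16.100.1/24 dev vxlan100
ip link set vxlan100 up
#On a node running flannel in VXLAN mode, the flannel.1 device is the VTEP:
ip -d link show flannel.1      #shows the VNI, UDP port and VTEP MAC
bridge fdb show dev flannel.1  #forwarding entries: remote MACs mapped to peer VTEP IPs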

33.5 Network Communication - Overlay Summary

  1. An overlay (stacked/covering) network implements a new virtual network on top of the physical network, so containers in the network can communicate with each other.
  2. Its advantage is good compatibility with the physical network; pods can communicate across host subnets.
  3. Network plugins such as calico and flannel support overlay networks.
  4. Its drawback is the extra encapsulation/decapsulation performance overhead.
  5. It is currently used mostly in private clouds.

33.6 Network Communication - Underlay

  • Underlay
The underlay network is the traditional IT infrastructure network, composed of devices such as switches and routers and driven by Ethernet, routing and VLAN protocols. It is also the network underneath an overlay network, providing it with data transport. In container networking, an underlay network is a network-construction technique that uses driver programs to expose the host's underlying network interfaces directly to containers; common solutions include MAC VLAN, IP VLAN and direct routing.
  • Underlay relies on the physical network for cross-host communication.

33.7 Network Communication - Underlay Implementation Modes

  • MAC VLAN mode:

    • MAC VLAN: supports virtualizing multiple network interfaces (sub-interfaces) on one Ethernet interface; each virtual interface has a unique MAC address and can be configured with its own IP on the NIC sub-interface.
  • IP VLAN mode:

    • IP VLAN is similar to MAC VLAN: it also creates new virtual interfaces and assigns each a unique IP address; the difference is that every virtual interface shares the MAC address of the physical interface.
  • Private mode:

    • In private mode, containers on the same host cannot communicate with each other, not even if a switch forwards the packets back.
  • VEPA mode:

    • Virtual Ethernet Port Aggregator (VEPA): in this mode, containers in the macvlan cannot directly receive request packets from containers on the same physical NIC, but communication is possible when a switch reflects the traffic back (port hairpin).
  • passthru mode:

    • In passthru mode the macvlan can host only one container; once a container is running, creating additional containers fails.
  • bridge mode:

    • In bridge mode, macvlan containers using the same host network can communicate with each other directly; this mode is recommended (see the Docker sketch after this list).
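
A hedged Docker sketch of macvlan in bridge mode; the subnet, gateway, parent NIC and container IP are assumptions for illustration:

#Create a macvlan network in bridge mode on top of the host NIC
docker network create -d macvlan --subnet=10.0.0.0/24 --gateway=10.0.0.2 -o parent=ens33 -o macvlan_mode=bridge macvlan-net
#A container on this network gets its own MAC address and an IP from the host subnet
docker run -it --rm --network macvlan-net --ip 10.0.0.50 alpine ip addr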

33.8 Network Communication - Summary

With cloud hosts, use underlay; otherwise, use overlay.

  • Overlay: a stacked overlay network implemented with encapsulation technologies such as VXLAN and NVGRE.
  • Macvlan: multiple virtual VLANs implemented on different sub-interfaces of the Docker host's physical NIC; each sub-interface is one virtual VLAN, and containers reach external networks through the host's routing function.

33.9 Kubernetes Network Communication Modes

  • Overlay: a virtual subnet
    • Flannel VXLAN, Calico BGP, Calico VXLAN
    • Pod address information is encapsulated inside the host address information, enabling traffic that crosses hosts and can also cross node subnets.
  • Direct routing: requires the next-hop address to be reachable on the network
    • Flannel Host-gw, Flannel VXLAN DirectRouting, Calico DirectRouting
    • Based on host routes, packets are forwarded directly from the source host to the destination host without overlay encapsulation, so performance is better than overlay.
  • Underlay: requires one large subnet; container IPs and host IPs can communicate directly
    • No separate virtual network is enabled for pods; they use the host's physical network directly. Pods can even be reached directly from nodes outside the k8s environment (the pod network is bridged with the node network), effectively treating pods like bridged-mode VMs, which makes it convenient for clients outside the k8s environment to access services in pods; and since pods use the host network, performance is the best of the three.

33.10 Overlay and Underlay Cluster Communication Deployment

33.10.1 K8s Environment Preparation

#Install docker; perform these environment steps on all cluster nodes
root@master1:~# apt update
#Configure hostname mapping
root@master1:~# cat /etc/hosts
10.0.0.201 master1
10.0.0.202 node1
10.0.0.203 node2
10.0.0.204 node3
#Install prerequisite tools
root@master1:~# apt -y install apt-transport-https ca-certificates curl software-properties-common
#Install the GPG key
root@master1:~# curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
#Add the apt repository
root@master1:~# sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
#Refresh the package index again after adding the repo
root@master1:~# apt update
#List installable Docker versions
root@master1:~# apt-cache madison docker-ce docker-ce-cli
root@master1:~# apt install -y docker-ce docker-ce-cli
root@master1:~# systemctl start docker && systemctl enable docker
#Tune daemon parameters: configure the registry mirror and the systemd cgroup driver
root@master1:~# sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://9916w1ow.mirror.aliyuncs.com"]
}
EOF
#Reload units and restart the service
sudo systemctl daemon-reload && sudo systemctl restart docker
#Disable swap
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
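
A quick sanity check worth running at this point, assuming the daemon restarted cleanly:

#Expect "Cgroup Driver: systemd" and the Aliyun mirror in the output
root@master1:~# docker info | grep -i 'cgroup driver'
root@master1:~# docker info | grep -iA1 'registry mirrors'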

33.10.2 Install cri-dockerd

root@master1:~# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.5/cri-dockerd-0.2.5.amd64.tgz
root@master1:~# tar xf cri-dockerd-0.2.5.amd64.tgz
root@master1:~# cp cri-dockerd/cri-dockerd /usr/local/bin/
scp cri-dockerd/cri-dockerd [email protected]:/usr/local/bin/
scp cri-dockerd/cri-dockerd [email protected]:/usr/local/bin/
scp cri-dockerd/cri-dockerd [email protected]:/usr/local/bin/
#Configure the cri-dockerd service unit files on all nodes
cat > /lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com

After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP $MAINPID

TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

cat > /etc/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF

root@master1:~# systemctl enable --now cri-docker cri-docker.socket
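
Before pointing kubeadm at cri-dockerd, it is worth confirming the service and socket are live; the socket path below is the one used later by --cri-socket:

root@master1:~# systemctl is-active cri-docker.service cri-docker.socket
root@master1:~# ls -l /var/run/cri-dockerd.sock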

33.10.3 Install kubelet, kubeadm and kubectl on All Nodes and Configure the Aliyun Kubernetes apt Source

  • Install a kubeadm newer than 1.24.0
#Use the Aliyun or Tsinghua University kubernetes mirror
root@master1:~# apt-get update && apt-get install -y apt-transport-https
root@master1:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
root@master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
root@master1:~# apt-get update
#List installable versions
root@master1:~# apt-cache madison kubeadm
root@master1:~# apt-get install -y kubelet=1.24.4-00 kubeadm=1.24.4-00 kubectl=1.24.4-00
#Verify the version
root@master1:~# kubeadm version
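
Optionally, pin the packages so a routine apt upgrade cannot move them; this is common practice rather than a step from the original setup:

root@master1:~# apt-mark hold kubelet kubeadm kubectl
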
  • Cluster initialization - image preparation
#Pre-pull the required images on the master node
root@master1:~# kubeadm config images list --kubernetes-version v1.24.4
k8s.gcr.io/kube-apiserver:v1.24.4
k8s.gcr.io/kube-controller-manager:v1.24.4
k8s.gcr.io/kube-scheduler:v1.24.4
k8s.gcr.io/kube-proxy:v1.24.4
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
root@master1:~# cat images-download.sh 
#!/bin/bash

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
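
Run the script and confirm the images landed locally; the tags must match v1.24.4 exactly, otherwise kubeadm will pull again during init:

root@master1:~# bash images-download.sh
root@master1:~# docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers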

33.10.4 Cluster Initialization - Initialize Kubernetes

Scenario 1: pods may use overlay or underlay, and the SVC uses overlay; if the SVC is underlay, it must be configured to use a subnet of the host network.
The scenario below is an overlay network: the pod CIDR will later serve overlay pods, and the service CIDR will serve the overlay SVC scenario.

kubeadm init --apiserver-advertise-address=10.0.0.201 \
--apiserver-bind-port=6443 \
--kubernetes-version=v1.24.4 \
--pod-network-cidr=10.200.0.0/16 \
--service-cidr=10.100.0.0/16 \
--service-dns-domain=cluster.local \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--cri-socket unix:///var/run/cri-dockerd.sock

Scenario 2: pods may use overlay or underlay, but the SVC is initialized as underlay. --pod-network-cidr=10.200.0.0/16 will be used for later overlay scenarios; the underlay network CIDR is specified separately later, and overlay will coexist with underlay. --service-cidr=10.0.1.0/24 is used for the later underlay SVC, through which pods can be accessed directly.

kubeadm init --apiserver-advertise-address=10.0.0.201 \
--apiserver-bind-port=6443 \
--kubernetes-version=v1.24.4 \
--pod-network-cidr=10.200.0.0/16 \
--service-cidr=10.0.1.0/24 \
--service-dns-domain=cluster.local \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--cri-socket unix:///var/run/cri-dockerd.sock
#Note: to reach the SVC later, static routes must be configured on the network devices, because an SVC is only iptables or IPVS rules and does not broadcast ARP packets.

Scenario 1 - initialization

kubeadm init --apiserver-advertise-address=10.0.0.201 \
--apiserver-bind-port=6443 \
--kubernetes-version=v1.24.4 \
--pod-network-cidr=10.200.0.0/16 \
--service-cidr=10.100.0.0/16 \
--service-dns-domain=cluster.local \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--cri-socket unix:///var/run/cri-dockerd.sock

#Then run
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Join the cluster using the token, with the cri-dockerd socket appended
kubeadm join 10.0.0.201:6443 --token c0yx59.iknv4ntorqfjyctq \
	--discovery-token-ca-cert-hash sha256:3bb9c35d92ada8bb0e12b301bb42acec9a7d7f9c519267608439c87f28fd04a7 --cri-socket unix:///var/run/cri-dockerd.sock
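
If the token above has expired (default TTL is 24h), a fresh join command can be printed on the master; remember to append --cri-socket unix:///var/run/cri-dockerd.sock on each joining node:

root@master1:~# kubeadm token create --print-join-command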

root@master1:~# kubectl get nodes
NAME      STATUS     ROLES           AGE    VERSION
master1   NotReady   control-plane   3m3s   v1.24.4
node1     NotReady   <none>          18s    v1.24.4
node2     NotReady   <none>          0s     v1.24.4

33.11 Install Underlay Network Components

33.11.1 Helm Environment Preparation

root@master1:~# wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
root@master1:~# tar xvf helm-v3.9.0-linux-amd64.tar.gz
root@master1:~# mv linux-amd64/helm /usr/local/bin/
root@master1:~# helm version
version.BuildInfo{Version:"v3.9.0", GitCommit:"7ceeda6c585217a19a1131663d8cd1f7d641b2a7", GitTreeState:"clean", GoVersion:"go1.17.5"}

33.11.2 Deploy hybridnet

#Official docs:
https://helm.sh/docs/intro/install/
https://github.com/helm/helm/releases
#If the image cannot be pulled automatically, pull it manually:
docker pull docker.io/hybridnetdev/hybridnet:v0.7.6

root@master1:~# helm repo add hybridnet https://alibaba.github.io/hybridnet/
"hybridnet" has been added to your repositories
root@master1:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "hybridnet" chart repository
Update Complete. ⎈Happy Helming!⎈
root@master1:~# helm install hybridnet hybridnet/hybridnet -n kube-system --set init.cidr=10.200.0.0/16 #configure the overlay pod network; without --set init.cidr=10.200.0.0/16 the default 100.64.0.0/16 is used
#Check once the install completes
root@master1:~# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   57m   v1.24.4
node1     Ready    <none>          54m   v1.24.4
node2     Ready    <none>          54m   v1.24.4
node3     Ready    <none>          54m   v1.24.4

At this point the hybridnet-manager and hybridnet-webhook pods are Pending; kubectl describe shows that no node in the cluster carries the master label.

root@master1:~# kubectl get pod -n kube-system
NAME                                READY   STATUS              RESTARTS      AGE
calico-typha-b4fd58bf4-7s8g5        1/1     Running             0             32m
calico-typha-b4fd58bf4-dsptx        1/1     Running             0             32m
calico-typha-b4fd58bf4-rr5zb        1/1     Running             0             32m
coredns-7f74c56694-dz79n            0/1     ContainerCreating   0             55m
coredns-7f74c56694-gn5zv            0/1     ContainerCreating   0             55m
etcd-master1                        1/1     Running             0             55m
hybridnet-daemon-9qzk2              2/2     Running             0             32m
hybridnet-daemon-fqm2n              2/2     Running             1 (50s ago)   32m
hybridnet-daemon-qm97c              2/2     Running             0             32m
hybridnet-daemon-th2wn              2/2     Running             2 (57s ago)   32m
hybridnet-manager-8bcf9d978-fvqmg   0/1     Pending             0             32m
hybridnet-manager-8bcf9d978-lw4lz   0/1     Pending             0             32m
hybridnet-manager-8bcf9d978-vs5zk   0/1     Pending             0             32m
hybridnet-webhook-66899f995-7svhk   0/1     Pending             0             32m
hybridnet-webhook-66899f995-lxms9   0/1     Pending             0             32m
hybridnet-webhook-66899f995-wjshx   0/1     Pending             0             32m
kube-apiserver-master1              1/1     Running             0             55m
kube-controller-manager-master1     1/1     Running             0             55m
kube-proxy-5hkvp                    1/1     Running             0             52m
kube-proxy-6lphj                    1/1     Running             0             52m
kube-proxy-dhmt4                    1/1     Running             0             53m
kube-proxy-rws5q                    1/1     Running             0             55m
kube-scheduler-master1              1/1     Running             0             55m

kubectl label node master1 node-role.kubernetes.io/master=
kubectl label node node1 node-role.kubernetes.io/master=
kubectl label node node2 node-role.kubernetes.io/master=
kubectl label node node3 node-role.kubernetes.io/master=

#After labeling, the pods come up
root@master1:~# kubectl get pod -n kube-system
NAME                                READY   STATUS    RESTARTS        AGE
calico-typha-b4fd58bf4-7s8g5        1/1     Running   0               35m
calico-typha-b4fd58bf4-dsptx        1/1     Running   0               35m
calico-typha-b4fd58bf4-rr5zb        1/1     Running   0               35m
coredns-7f74c56694-dz79n            1/1     Running   0               58m
coredns-7f74c56694-gn5zv            1/1     Running   0               58m
etcd-master1                        1/1     Running   0               59m
hybridnet-daemon-9qzk2              2/2     Running   1 (3m4s ago)    35m
hybridnet-daemon-fqm2n              2/2     Running   1 (4m5s ago)    35m
hybridnet-daemon-qm97c              2/2     Running   1 (3m8s ago)    35m
hybridnet-daemon-th2wn              2/2     Running   2 (4m12s ago)   35m
hybridnet-manager-8bcf9d978-fvqmg   1/1     Running   0               35m
hybridnet-manager-8bcf9d978-lw4lz   1/1     Running   0               35m
hybridnet-manager-8bcf9d978-vs5zk   1/1     Running   0               35m
hybridnet-webhook-66899f995-7svhk   1/1     Running   0               35m
hybridnet-webhook-66899f995-lxms9   1/1     Running   0               35m
hybridnet-webhook-66899f995-wjshx   1/1     Running   0               35m
kube-apiserver-master1              1/1     Running   0               59m
kube-controller-manager-master1     1/1     Running   0               59m
kube-proxy-5hkvp                    1/1     Running   0               56m
kube-proxy-6lphj                    1/1     Running   0               56m
kube-proxy-dhmt4                    1/1     Running   0               56m
kube-proxy-rws5q                    1/1     Running   0               58m
kube-scheduler-master1              1/1     Running   0               59m

33.11.3 Verify Underlay

  • Create the underlay network and associate it with the nodes
root@master1:~# mkdir /root/hybridnet
root@master1:~# cd /root/hybridnet
root@master1:~/hybridnet/2.underlay-cases-files# cat 1.create-underlay-network.yaml 
---
apiVersion: networking.alibaba.com/v1
kind: Network
metadata:
  name: underlay-network1
spec:
  netID: 0
  type: Underlay
  nodeSelector:
    network: "underlay-nethost"

---
apiVersion: networking.alibaba.com/v1
kind: Subnet
metadata:
  name: underlay-network1 
spec:
  network: underlay-network1
  netID: 0
  range:
    version: "4"            # IPV4
    cidr: "10.0.0.0/24"     # 宿主机集群地址段
    gateway: "10.0.0.2"     # 外部网关地址
    start: "10.0.0.5"       # 起始IP
    end: "10.0.0.254"       # 结束IP

root@master1:~/hybridnet# kubectl label node master1 network=underlay-nethost
root@master1:~/hybridnet# kubectl label node node1 network=underlay-nethost
root@master1:~/hybridnet# kubectl label node node2 network=underlay-nethost
root@master1:~/hybridnet# kubectl label node node3 network=underlay-nethost
root@master1:~/hybridnet/2.underlay-cases-files# kubectl apply -f 1.create-underlay-network.yaml
#Inspect the network
#V4TOTAL is the size of the address pool
#V4AVAILABLE is the number of available IPs
root@master1:~/hybridnet/2.underlay-cases-files# kubectl get network
NAME                NETID   TYPE       MODE   V4TOTAL   V4USED   V4AVAILABLE   LASTALLOCATEDV4SUBNET   V6TOTAL   V6USED   V6AVAILABLE   LASTALLOCATEDV6SUBNET
init                4       Overlay           65534     2        65532         init                    0         0        0             
underlay-network1   0       Underlay          250       0        250           underlay-network1       0         0        0             
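
The Subnet object created above can be inspected the same way (a hedged check; output columns may vary across hybridnet versions):

root@master1:~/hybridnet/2.underlay-cases-files# kubectl get subnet
root@master1:~/hybridnet/2.underlay-cases-files# kubectl describe subnet underlay-network1
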
  • Changes on the nodes after creation
root@master1:~/hybridnet/2.underlay-cases-files# kubectl describe node node1
Name:               node1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node1
                    kubernetes.io/os=linux
#hybridnet added these labels on its own
                    network=underlay-nethost
                    networking.alibaba.com/dualstack-address-quota=empty
                    networking.alibaba.com/ipv4-address-quota=nonempty
                    networking.alibaba.com/ipv6-address-quota=empty
                    networking.alibaba.com/overlay-network-attachment=true
                    networking.alibaba.com/underlay-network-attachment=true
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 27 Dec 2022 13:10:25 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node1
  AcquireTime:     <unset>
  RenewTime:       Thu, 29 Dec 2022 13:16:47 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 29 Dec 2022 13:13:21 +0000   Thu, 29 Dec 2022 13:08:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 29 Dec 2022 13:13:21 +0000   Thu, 29 Dec 2022 13:08:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 29 Dec 2022 13:13:21 +0000   Thu, 29 Dec 2022 13:08:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 29 Dec 2022 13:13:21 +0000   Thu, 29 Dec 2022 13:08:21 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.202
  Hostname:    node1
Capacity:
  cpu:                1
  ephemeral-storage:  19475088Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             976228Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  17948241072
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             873828Ki
  pods:               110
System Info:
  Machine ID:                 0606fe18da484d298e76da5c8499679e
  System UUID:                c2124d56-b6a3-ff1b-47e9-bc0ffb1811e2
  Boot ID:                    9ef07902-082d-45f0-8d7b-ab34c474f2f8
  Kernel Version:             5.4.0-42-generic
  OS Image:                   Ubuntu 20.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.22
  Kubelet Version:            v1.24.4
  Kube-Proxy Version:         v1.24.4
PodCIDR:                      10.200.1.0/24
PodCIDRs:                     10.200.1.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-typha-b4fd58bf4-dsptx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47h
  kube-system                 hybridnet-daemon-qm97c               0 (0%)        0 (0%)      0 (0%)           0 (0%)         47h
  kube-system                 hybridnet-manager-8bcf9d978-fvqmg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47h
  kube-system                 hybridnet-webhook-66899f995-wjshx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47h
  kube-system                 kube-proxy-dhmt4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
  hugepages-1Gi      0 (0%)    0 (0%)
  hugepages-2Mi      0 (0%)    0 (0%)
Events:
  Type    Reason                   Age                  From             Message
  ----    ------                   ----                 ----             -------
  Normal  NodeNotReady             10m                  node-controller  Node node1 status is now: NodeNotReady
  Normal  NodeHasSufficientMemory  8m40s (x9 over 2d)   kubelet          Node node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m40s (x9 over 2d)   kubelet          Node node1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m40s (x9 over 2d)   kubelet          Node node1 status is now: NodeHasSufficientPID
  Normal  NodeNotReady             8m40s                kubelet          Node node1 status is now: NodeNotReady
  Normal  NodeReady                8m29s (x2 over 47h)  kubelet          Node node1 status is now: NodeReady
  • Test - overlay

Unlike flannel and calico, hybridnet does not build route-table entries here;

it programs iptables rules through the kernel, and the NodePort 30003 traffic travels over the VXLAN tunnel, so no route table is needed.

#Once this underlay network is created you will find that the calico components become usable as well
root@master1:~/hybridnet/2.underlay-cases-files# kubectl create ns myserver
namespace/myserver created
root@master1:~/hybridnet/2.underlay-cases-files# cat 2.tomcat-app1-overlay.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-overlay-label
  name: myserver-tomcat-app1-deployment-overlay
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-tomcat-app1-overlay-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-overlay-selector
    spec:
      nodeName: node2
      containers:
      - name: myserver-tomcat-app1-container
        image: tomcat:7.0.93-alpine 
        # image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1 
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
#        resources:
#          limits:
#            cpu: 0.5
#            memory: "512Mi"
#          requests:
#            cpu: 0.5
#            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-overlay-label
  name: myserver-tomcat-app1-service-overlay
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30003
  selector:
    app: myserver-tomcat-app1-overlay-selector

root@node2:~# iptables-save | grep 30003
-A KUBE-NODEPORTS -p tcp -m comment --comment "myserver/myserver-tomcat-app1-service-overlay:http" -m tcp --dport 30003 -j KUBE-EXT-WFFMGFGZJCYAYZ6A
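
A quick reachability check of the NodePort from the master, assuming node2 is 10.0.0.203 per the /etc/hosts mapping above:

root@master1:~# curl -I http://10.0.0.203:30003/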

  • Test - underlay
#An annotation field is used to explicitly distinguish between the overlay and underlay networks
root@master1:~/hybridnet/2.underlay-cases-files# cat 3.tomcat-app1-underlay.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-underlay-label
  name: myserver-tomcat-app1-deployment-underlay
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-tomcat-app1-underlay-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-underlay-selector
      annotations: # use the Underlay or the Overlay network
        networking.alibaba.com/network-type: Underlay
    spec:
      #nodeName: k8s-node2.example.com
      containers:
      - name: myserver-tomcat-app1-container
        image: tomcat:7.0.93-alpine 
        # image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v2 
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
#        resources:
#          limits:
#            cpu: 0.5
#            memory: "512Mi"
#          requests:
#            cpu: 0.5
#            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-underlay-label
  name: myserver-tomcat-app1-service-underlay
  namespace: myserver
spec:
#  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    #nodePort: 40003
  selector:
    app: myserver-tomcat-app1-underlay-selector

#Verification: here the contrast between the two network types is clearly visible
root@master1:~/hybridnet/2.underlay-cases-files# kubectl get pod -n myserver -o wide
NAME                                                        READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
myserver-tomcat-app1-deployment-overlay-869ccb6f58-82lcw    1/1     Running   0          12m    10.200.0.6   node2   <none>           <none>
myserver-tomcat-app1-deployment-underlay-589d8bcd48-vpmgk   1/1     Running   0          3m1s   10.0.0.5     node3   <none>           <none>

The pod can be accessed directly via its pod IP plus the service port.

Scale up again to observe the IP allocation order

root@master1:~/hybridnet/2.underlay-cases-files# kubectl scale -n myserver deployment myserver-tomcat-app1-deployment-underlay  --replicas=3
root@master1:~/hybridnet/2.underlay-cases-files# kubectl get pod -n myserver -o wide
NAME                                                        READY   STATUS    RESTARTS        AGE     IP           NODE    NOMINATED NODE   READINESS GATES
myserver-tomcat-app1-deployment-overlay-869ccb6f58-82lcw    1/1     Running   1 (2m22s ago)   21m     10.200.0.6   node2   <none>           <none>
myserver-tomcat-app1-deployment-underlay-589d8bcd48-jwzpr   1/1     Running   0               5m22s   10.0.0.7     node1   <none>           <none>
myserver-tomcat-app1-deployment-underlay-589d8bcd48-vpmgk   1/1     Running   1 (2m54s ago)   11m     10.0.0.5     node3   <none>           <none>
myserver-tomcat-app1-deployment-underlay-589d8bcd48-znqc9   1/1     Running   1 (2m22s ago)   5m22s   10.0.0.6     node2   <none>           <none>
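
A hedged cross-network check: exec into the overlay pod and fetch from an underlay pod IP on the container port (the pod name and IP are taken from the listing above):

root@master1:~/hybridnet/2.underlay-cases-files# kubectl -n myserver exec myserver-tomcat-app1-deployment-overlay-869ccb6f58-82lcw -- wget -qO- http://10.0.0.5:8080/ | head -n5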

  • Accessing pods via the service IP

A pod IP can be fixed, but much of the time it changes dynamically, so how do you reach the service behind it in a stable way? An SVC IP does not change unless the SVC is deleted and recreated; a domain name can be resolved to the SVC through self-hosted DNS, and if the SVC IP ever changes, the domain's A record can be updated.
Note: to reach the SVC later, static routes must be configured on the network devices to open the path from the client to the SVC and forward requests to a specific kubernetes node to answer, e.g. adding a static route on Linux:
~# route add -net 172.31.5.0 netmask 255.255.255.0 gateway 172.31.6.204

root@master1:~/hybridnet/2.underlay-cases-files# cat 4-pod-underlay.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
  annotations:
    networking.alibaba.com/network-type: Underlay
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
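
Applying it and checking the assigned address confirms the annotation took effect; expect an IP from the 10.0.0.5-10.0.0.254 underlay range rather than from 10.200.0.0/16:

root@master1:~/hybridnet/2.underlay-cases-files# kubectl apply -f 4-pod-underlay.yaml
root@master1:~/hybridnet/2.underlay-cases-files# kubectl get pod annotations-demo -o wide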

#This is roughly the process: the SVC subnet must be added to the routing table. It did not work for me, presumably because my VM network cannot reach the host network.
root@master1:~/hybridnet/2.underlay-cases-files# route add  -net 10.100.15.0 netmask 255.255.255.0 gateway 10.0.0.6
root@master1:~/hybridnet/2.underlay-cases-files# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
10.100.15.0     10.0.0.6        255.255.255.0   UG    0      0        0 ens33
10.100.15.0     10.0.0.202      255.255.255.0   UG    0      0        0 ens33
root@master1:~/hybridnet/2.underlay-cases-files# kubectl get svc -n myserver 
NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
myserver-tomcat-app1-service-overlay    NodePort    10.100.65.194   <none>        80:30003/TCP   49m
myserver-tomcat-app1-service-underlay   ClusterIP   10.100.15.87    <none>        80/TCP         39m
root@master1:~/hybridnet/2.underlay-cases-files# kubectl get pod -n myserver -o wide
NAME                                                        READY   STATUS    RESTARTS      AGE   IP           NODE    NOMINATED NODE   READINESS GATES
myserver-tomcat-app1-deployment-overlay-869ccb6f58-82lcw    1/1     Running   1 (31m ago)   50m   10.200.0.6   node2   <none>           <none>
myserver-tomcat-app1-deployment-underlay-589d8bcd48-jwzpr   1/1     Running   0             34m   10.0.0.7     node1   <none>           <none>
myserver-tomcat-app1-deployment-underlay-589d8bcd48-vpmgk   1/1     Running   1 (31m ago)   40m   10.0.0.5     node3   <none>           <none>
myserver-tomcat-app1-deployment-underlay-589d8bcd48-znqc9   1/1     Running   1 (31m ago)   34m   10.0.0.6     node2   <none>           <none>

33.11.4 Configure the Default K8s Network Behavior

The default is the overlay network. If comparatively many pods use underlay, the behavior can be changed so that pods created without an explicit network type default to the underlay network.

To change the default network behavior from underlay to Overlay:
helm upgrade hybridnet hybridnet/hybridnet -n kube-system --set defaultNetworkType=Overlay
or:
kubectl edit deploy hybridnet-webhook -n kube-system
kubectl edit deploy hybridnet-manager -n kube-system

#Once the default rule is configured, pods created without an explicit network type will use the configured default network.
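
A hedged way to verify the default: create a pod without the network-type annotation and check which pool its address comes from (in this cluster 10.200.0.0/16 means overlay, 10.0.0.0/24 means underlay):

root@master1:~# kubectl run nettest --image=alpine --restart=Never -- sleep 3600
root@master1:~# kubectl get pod nettest -o wide
root@master1:~# kubectl delete pod nettest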

