
Installing Kubernetes v1.27.2 and configuring the Calico network in BGP mode


1. Cluster information

All machines are 2C/4G virtual machines with 60 GB disks, running CentOS 7.9.

IP            Hostname       OS        Role
192.168.63.61 master.sec.com centos7.9 master
192.168.63.62 node01.sec.com centos7.9 worker
192.168.63.63 node02.sec.com centos7.9 worker

2. Basic system configuration

2.1. Change the hostnames

# Set the hostname on the master node
hostnamectl set-hostname master.sec.com
hostname master.sec.com
# Set the hostname on node01
hostnamectl set-hostname node01.sec.com
hostname node01.sec.com
# Set the hostname on node02
hostnamectl set-hostname node02.sec.com
hostname node02.sec.com

2.2. Configure the IP addresses

# IP configuration on the master host
cat /etc/sysconfig/network-scripts/ifcfg-ens192
TYPE="Ethernet"
NAME="ens192"
DEVICE="ens192"
ONBOOT="yes"
IPADDR="192.168.63.61"
PREFIX="24"
GATEWAY="192.168.63.1"
DNS1="192.168.64.3"
DNS2="223.5.5.5"

# IP configuration on node01
cat /etc/sysconfig/network-scripts/ifcfg-ens192
TYPE="Ethernet"
NAME="ens192"
DEVICE="ens192"
ONBOOT="yes"
IPADDR="192.168.63.62"
PREFIX="24"
GATEWAY="192.168.63.1"
DNS1="192.168.64.3"
DNS2="223.5.5.5"


# IP configuration on node02
cat /etc/sysconfig/network-scripts/ifcfg-ens192
TYPE="Ethernet"
NAME="ens192"
DEVICE="ens192"
ONBOOT="yes"
IPADDR="192.168.63.63"
PREFIX="24"
GATEWAY="192.168.63.1"
DNS1="192.168.64.3"
DNS2="223.5.5.5"

# Restart the network service to apply the changes
systemctl restart network

2.3. Disable the firewall

# Run on the master node and on all worker nodes
systemctl stop firewalld
systemctl disable firewalld

2.4. Disable SELinux

# Run on the master node and on all worker nodes
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
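  • A quick way to confirm the change (a suggested check, not part of the original steps): getenforce should report Permissive now and Disabled after the next reboot
getenforce
grep '^SELINUX=' /etc/selinux/config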

2.5. SSH connection optimization

# Run on the master node and on all worker nodes
sed -i 's/^#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
systemctl reload sshd

2.6. Install common tools

# Run on the master node and on all worker nodes
yum update -y && yum install curl wget tree net-tools vim ntpdate -y

# Install ipset and ipvsadm
yum -y install ipset ipvsadm

2.7. Scheduled time synchronization

# Run on the master node and on all worker nodes; sync time from the Aliyun NTP server via cron
cat <<EOF >> /var/spool/cron/root
0 */1 * * * /usr/sbin/ntpdate ntp1.aliyun.com
EOF

# Run on the master node and on all worker nodes; suppress the interrupting "You have new mail" notifications
echo "unset MAILCHECK" >>/etc/profile && source /etc/profile

2.8. Configure /etc/hosts

# Run on the master node and on all worker nodes
cat <<EOF >> /etc/hosts
192.168.63.61 master master.sec.com
192.168.63.62 node01 node01.sec.com
192.168.63.63 node02 node02.sec.com
EOF

2.9. Regenerate machine-id

# Run on the master node and on all worker nodes; this fixes cloned VMs ending up with identical machine-id values

rm -f /etc/machine-id && systemd-machine-id-setup

2.10. Disable swap

# Run on the master node and on all worker nodes
sed -i 's/.*swap.*/#&/g' /etc/fstab
swapoff -a
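  • To verify that swap is fully off (a suggested check): both commands should show no active swap space
swapon -s
free -m | grep -i swap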

2.11. Adjust kernel parameters

# Run on the master node and on all worker nodes
cat <<EOF >> /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
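  • Note: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded; if sysctl --system complains about unknown keys, load the module and make it persistent (a suggested addition, not in the original)
modprobe br_netfilter
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sysctl --system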

2.12. Load the IPVS kernel modules

# Run on the master node and on all worker nodes
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Check that the modules are loaded
lsmod | grep ip_vs
lsmod | grep nf_conntrack

# Create a script that loads the IPVS modules
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

# Make the script executable so the IPVS modules are loaded at boot, and run it once now
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

3. Deploy Docker

3.1. Configure the Docker yum repository

# Run on the master node and on all worker nodes; use the Aliyun mirror of the Docker repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Rebuild the yum cache
yum makecache

3.2. Install docker-ce

# Run on the master node and on all worker nodes; install Docker
yum install docker-ce docker-ce-cli containerd.io -y

3.3. Configure the Docker cgroup driver and registry mirror

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [
	  "https://docker.nju.edu.cn/"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
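  • Once Docker is running (section 3.5), the cgroup driver and registry mirror can be confirmed (a suggested check)
docker info | grep -A1 -iE 'cgroup driver|registry mirrors'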

3.4. Install cri-dockerd

# Kubernetes 1.24 removed the dockershim integration; to keep using Docker as the container runtime for Kubernetes, cri-dockerd must be installed
# Run on the master node and on all worker nodes
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.2/cri-dockerd-0.3.2-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.3.2-3.el7.x86_64.rpm

3.5. Start Docker

# Run on the master node and on all worker nodes
systemctl start docker && systemctl enable docker

3.6. Start cri-dockerd

# Run on the master node and on all worker nodes
systemctl start cri-docker.service cri-docker.socket && systemctl enable cri-docker.service cri-docker.socket

# Once running, the socket file /var/run/cri-dockerd.sock appears under /var/run
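  • A quick sanity check that cri-dockerd is up and listening on its socket (a suggested check)
systemctl is-active cri-docker.service
ls -l /var/run/cri-dockerd.sock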

4. Deploy Kubernetes

4.1 Configure the Kubernetes yum repository

# Run on the master node and on all worker nodes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
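  • Since the cluster is pinned to v1.27.2 in section 4.3.5, it can help to check which package versions the repository offers, or to install that exact version instead of the latest (a suggested, optional step)
yum list --showduplicates kubeadm | tail -n 5
# optionally pin the version explicitly:
# yum install -y kubelet-1.27.2 kubeadm-1.27.2 kubectl-1.27.2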

4.2 Configure the cri-dockerd pause image

# Run on the master node and on all worker nodes
vim /usr/lib/systemd/system/cri-docker.service
# Change the ExecStart line of /usr/lib/systemd/system/cri-docker.service to the following

ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

# Run on the master node and on all worker nodes; reload systemd and restart the services after the change
systemctl daemon-reload
systemctl restart cri-docker.service cri-docker.socket
  • Example of the modified /usr/lib/systemd/system/cri-docker.service file
# The modified /usr/lib/systemd/system/cri-docker.service file
cat /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

4.3. Install the Kubernetes master

  • The following steps are performed only on the master node

4.3.1. Install the Kubernetes components

yum install -y kubelet kubeadm kubectl

4.3.2. Configure kubelet to use the systemd cgroup driver

cat <<EOF >>/etc/default/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
EOF

4.3.3. Enable and start kubelet

systemctl enable kubelet && systemctl start kubelet

4.3.4. Pre-pull the Kubernetes images

# Pre-pulling the images avoids slow downloads during cluster initialization; note that --cri-socket unix:///var/run/cri-dockerd.sock must be appended
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --cri-socket unix:///var/run/cri-dockerd.sock
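  • To preview exactly which images will be pulled (a suggested check)
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.27.2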

4.3.5. Initialize Kubernetes

# Use the Aliyun image repository, set the pod network to 10.224.0.0/16, the service network to 10.96.0.0/12, the Kubernetes version to v1.27.2, and point to the cri-dockerd socket
kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.224.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.63.61 \
--kubernetes-version=v1.27.2 \
--cri-socket unix:///var/run/cri-dockerd.sock

4.3.6. Configure kubectl access after installation

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# After a successful init, kubeadm prints how to join worker nodes to the cluster
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.63.61:6443 --token byjaat.knb8kma4j3zof9qf \
        --discovery-token-ca-cert-hash sha256:920c7aee5791e6b6b846d78d59953d609ff02fdcebc00bb644fe1696a97d5011
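  • With the kubeconfig in place, a quick check that the control plane is healthy (a suggested verification; CoreDNS stays Pending until the Calico network from section 4.5 is installed)
kubectl get nodes
kubectl get pods -n kube-system -o wide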

4.4. Install the Kubernetes workers

4.4.1. Install the Kubernetes components on the worker nodes

# Run on all worker nodes
yum install -y kubelet kubeadm

4.4.2. Configure kubelet to use the systemd cgroup driver

# Run on all worker nodes
cat <<EOF > /etc/default/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
EOF

4.4.3. Enable and start kubelet

# Run on all worker nodes
systemctl start kubelet && systemctl enable kubelet

4.4.4. Join the worker nodes to the cluster

# Run on all worker nodes, and append --cri-socket unix:///var/run/cri-dockerd.sock
-> kubeadm join 192.168.63.61:6443 --token byjaat.knb8kma4j3zof9qf \
        --discovery-token-ca-cert-hash sha256:920c7aee5791e6b6b846d78d59953d609ff02fdcebc00bb644fe1696a97d5011 \
        --cri-socket unix:///var/run/cri-dockerd.sock
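  • The bootstrap token printed by kubeadm init expires after 24 hours by default; if it has expired, a new join command can be generated on the master (a standard kubeadm feature, not shown in the original)
kubeadm token create --print-join-command
# remember to append --cri-socket unix:///var/run/cri-dockerd.sock when running the printed command on a worker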

# Node status after the workers have joined the cluster
-> kubectl get nodes
NAME             STATUS   ROLES                  AGE     VERSION
master.sec.com   Ready    control-plane,master   66m     v1.27.2
node01.sec.com   Ready    <none>                 2m32s   v1.27.2
node02.sec.com   Ready    <none>                 2m32s   v1.27.2

4.5 Install the Calico network components

# Run only on the master node
mkdir /root/kubernetes && cd /root/kubernetes && wget https://github.com/projectcalico/calico/releases/download/v3.25.1/release-v3.25.1.tgz
tar xf release-v3.25.1.tgz && mv release-v3.25.1 Calico-v3.25.1
cd Calico-v3.25.1/images
docker load -i calico-cni.tar
docker load -i calico-dikastes.tar
docker load -i calico-flannel-migration-controller.tar
docker load -i calico-kube-controllers.tar
docker load -i calico-node.tar
docker load -i calico-pod2daemon.tar
docker load -i calico-typha.tar

# Apply tigera-operator.yaml and custom-resources.yaml as described in the official documentation
cd /root/kubernetes/Calico-v3.25.1/manifests/

# Apply as-is, no changes required
kubectl create -f tigera-operator.yaml

# Change 192.168.0.0/16 in custom-resources.yaml to the 10.224.0.0/16 pod CIDR used during kubeadm init
kubectl create -f custom-resources.yaml
  • The modified custom-resources.yaml file
cat custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.224.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
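  • After both manifests are applied, the operator takes a few minutes to bring Calico up; progress can be watched with (a suggested check)
kubectl get tigerastatus
kubectl get pods -n calico-system
kubectl get nodes   # all nodes should report Ready once the CNI is running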

4.6 Install the calicoctl tool

# Can be installed on both the master and the worker nodes
wget https://github.com/projectcalico/calico/releases/download/v3.25.1/calicoctl-linux-amd64 -O /usr/local/bin/calicoctl && chmod +x /usr/local/bin/calicoctl
  • Check the Calico network status; by default Calico uses a node-to-node mesh
-> calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.63.62 | node-to-node mesh | up    | 05:16:12 | Established |
| 192.168.63.63 | node-to-node mesh | up    | 05:16:13 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

4.7. Configure Calico for BGP mode

4.7.1. YAML file for the Calico global BGP configuration

cat calico-bgp-configuration.yaml
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  # Disable Calico's default node-to-node mesh
  nodeToNodeMeshEnabled: false
  # AS number for the BGP cluster: 65002
  asNumber: 65002
  # Advertise the service cluster IP range over BGP
  serviceClusterIPs:
  - cidr: 10.96.0.0/12
  # Advertise the service external IP range over BGP; useful when a load balancer assigns external IPs
  serviceExternalIPs:
  - cidr: 10.112.0.0/12
  listenPort: 179
  bindMode: NodeIP
  communities:
  - name: bgp-large-community
    value: 65002:300:100
  # Advertise the pod CIDR over BGP
  prefixAdvertisements:
  - cidr: 10.224.0.0/16
    communities:
    - bgp-large-community
    - 65002:120

4.7.2. YAML file for the Calico BGP peer

cat calico-bgp-peer.yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  # Name for the BGP peer
  name: vyos02-peer
spec:
  # IP address of the remote BGP peer
  peerIP: 192.168.63.1
  # Keep the original next hop so each node advertises only routes for its own workloads
  keepOriginalNextHop: true
  # AS number of the remote peer
  asNumber: 65001
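  • The original text does not show how these two manifests are applied; since they are projectcalico.org/v3 resources, one option is calicoctl with the Kubernetes datastore (a minimal sketch, assuming the file names used above; kubectl apply should also work here because the Calico API server was installed in section 4.5)
export DATASTORE_TYPE=kubernetes KUBECONFIG=/etc/kubernetes/admin.conf
calicoctl apply -f calico-bgp-configuration.yaml
calicoctl apply -f calico-bgp-peer.yaml
# verify the objects exist
calicoctl get bgpconfiguration default -o yaml
calicoctl get bgppeer vyos02-peer -o yaml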

4.7.3. VyOS BGP configuration

# Create policy
set policy route-map setmet rule 2 action 'permit'
set policy route-map setmet rule 2 set as-path prepend '2 2 2'

# Set the VyOS BGP AS number to 65001
set protocols bgp system-as 65001

# Configure the master (192.168.63.61) as a BGP neighbor
set protocols bgp neighbor 192.168.63.61 remote-as 65002
set protocols bgp neighbor 192.168.63.61 address-family ipv4-unicast route-map import 'setmet'
set protocols bgp neighbor 192.168.63.61 address-family ipv4-unicast soft-reconfiguration 'inbound'

# Configure node01 (192.168.63.62) as a BGP neighbor
set protocols bgp neighbor 192.168.63.62 remote-as 65002
set protocols bgp neighbor 192.168.63.62 address-family ipv4-unicast route-map import 'setmet'
set protocols bgp neighbor 192.168.63.62 address-family ipv4-unicast soft-reconfiguration 'inbound'

# Configure node02 (192.168.63.63) as a BGP neighbor
set protocols bgp neighbor 192.168.63.63 remote-as 65002
set protocols bgp neighbor 192.168.63.63 address-family ipv4-unicast route-map import 'setmet'
set protocols bgp neighbor 192.168.63.63 address-family ipv4-unicast soft-reconfiguration 'inbound'

# Commit and save the configuration
commit
save

4.8. Verify the BGP sessions

4.8.1. BGP status on the master node

[root@master ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-----------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE |  SINCE   |    INFO     |
+--------------+-----------+-------+----------+-------------+
| 192.168.63.1 | global    | up    | 04:36:12 | Established |
+--------------+-----------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@master ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.63.1    0.0.0.0         UG    100    0        0 ens192
10.224.17.0     192.168.63.63   255.255.255.192 UG    0      0        0 ens192
10.224.76.192   192.168.63.62   255.255.255.192 UG    0      0        0 ens192
10.224.213.128  0.0.0.0         255.255.255.192 U     0      0        0 *
10.224.213.129  0.0.0.0         255.255.255.255 UH    0      0        0 calia2c5837fe2e
10.224.213.130  0.0.0.0         255.255.255.255 UH    0      0        0 cali9a6003d498a
10.224.213.131  0.0.0.0         255.255.255.255 UH    0      0        0 cali4db18097f4b
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.63.0    0.0.0.0         255.255.255.0   U     100    0        0 ens192
[root@master Calico-BGP]#

4.8.2. BGP status on node01

[root@node01 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-----------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE |  SINCE   |    INFO     |
+--------------+-----------+-------+----------+-------------+
| 192.168.63.1 | global    | up    | 04:36:33 | Established |
+--------------+-----------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@node01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.63.1    0.0.0.0         UG    100    0        0 ens192
10.224.17.0     192.168.63.63   255.255.255.192 UG    0      0        0 ens192
10.224.76.192   0.0.0.0         255.255.255.192 U     0      0        0 *
10.224.76.193   0.0.0.0         255.255.255.255 UH    0      0        0 cali4ef7bfc4e13
10.224.76.194   0.0.0.0         255.255.255.255 UH    0      0        0 cali16db9dcfcda
10.224.213.128  192.168.63.61   255.255.255.192 UG    0      0        0 ens192
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.63.0    0.0.0.0         255.255.255.0   U     100    0        0 ens192
[root@node01 ~]#

4.8.3. BGP status on node02

[root@node02 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-----------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE |  SINCE   |    INFO     |
+--------------+-----------+-------+----------+-------------+
| 192.168.63.1 | global    | up    | 04:36:56 | Established |
+--------------+-----------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@node02 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.63.1    0.0.0.0         UG    100    0        0 ens192
10.224.17.0     0.0.0.0         255.255.255.192 U     0      0        0 *
10.224.17.1     0.0.0.0         255.255.255.255 UH    0      0        0 cali20737978eb7
10.224.17.2     0.0.0.0         255.255.255.255 UH    0      0        0 cali740bd33e0a8
10.224.17.3     0.0.0.0         255.255.255.255 UH    0      0        0 cali00219c6cb15
10.224.76.192   192.168.63.62   255.255.255.192 UG    0      0        0 ens192
10.224.213.128  192.168.63.61   255.255.255.192 UG    0      0        0 ens192
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.63.0    0.0.0.0         255.255.255.0   U     100    0        0 ens192
[root@node02 ~]#

4.8.4. BGP status on VyOS

vyos@vyos02:~$ show ip bgp summary

IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.63.1, local AS number 65001 vrf-id 0
BGP table version 48
RIB entries 9, using 1728 bytes of memory
Peers 3, using 2172 KiB of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
192.168.63.61   4      65002       458       398        0    0    0 04:13:47            6        5 N/A
192.168.63.62   4      65002       324       282        0    0    0 04:13:26            6        5 N/A
192.168.63.63   4      65002       324       285        0    0    0 04:13:04            6        5 N/A

Total number of neighbors 3
vyos@vyos02:~$

vyos@vyos02:~$ show ip route bgp
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

B>* 10.96.0.0/12 [20/0] via 192.168.63.61, eth8, weight 1, 00:53:21
  *                     via 192.168.63.62, eth8, weight 1, 00:53:21
  *                     via 192.168.63.63, eth8, weight 1, 00:53:21
B>* 10.112.0.0/12 [20/0] via 192.168.63.61, eth8, weight 1, 00:53:21
  *                      via 192.168.63.62, eth8, weight 1, 00:53:21
  *                      via 192.168.63.63, eth8, weight 1, 00:53:21
B>* 10.224.17.0/26 [20/0] via 192.168.63.63, eth8, weight 1, 04:11:46
B>* 10.224.76.192/26 [20/0] via 192.168.63.62, eth8, weight 1, 04:11:46
B>* 10.224.213.128/26 [20/0] via 192.168.63.61, eth8, weight 1, 04:11:46
vyos@vyos02:~$
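  • With the pod and service routes learned by the router, cluster workloads become reachable from outside the cluster without NodePort; a quick hypothetical test (the deployment/service name web-test is made up for illustration)
# on the master
kubectl create deployment web-test --image=nginx --replicas=2
kubectl expose deployment web-test --port=80
kubectl get svc web-test        # note the ClusterIP
# from the router or any host that has learned the BGP routes
curl -I http://<ClusterIP-of-web-test>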

4.9. Switch kube-proxy to IPVS mode

4.9.1. Check the current kube-proxy mode

  • kube-proxy uses iptables by default; iptables performance is limited and not well suited for production, so switch it to IPVS mode
-> kubectl get pods -n kube-system | grep 'kube-proxy'
kube-proxy-2nln5                         1/1     Running   2 (26h ago)   2d14h
kube-proxy-c99m8                         1/1     Running   2 (26h ago)   2d14h
kube-proxy-w7gvk                         1/1     Running   2 (26h ago)   44h

-> kubectl logs -n kube-system kube-proxy-2nln5
I0523 03:44:23.156364       1 node.go:141] Successfully retrieved node IP: 192.168.63.62
I0523 03:44:23.156645       1 server_others.go:110] "Detected node IP" address="192.168.63.62"
I0523 03:44:23.157205       1 server_others.go:551] "Using iptables proxy"
I0523 03:44:23.327111       1 server_others.go:190] "Using iptables Proxier"
I0523 03:44:23.327242       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0523 03:44:23.327293       1 server_others.go:198] "Creating dualStackProxier for iptables"
I0523 03:44:23.327349       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
I0523 03:44:23.327522       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0523 03:44:23.329166       1 server.go:657] "Version info" version="v1.27.2"
I0523 03:44:23.329207       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0523 03:44:23.336171       1 conntrack.go:100] "Set sysctl" entry="net/netfilter/nf_conntrack_max" value=131072
I0523 03:44:23.336280       1 conntrack.go:52] "Setting nf_conntrack_max" nfConntrackMax=131072
I0523 03:44:23.337219       1 conntrack.go:83] "Setting conntrack hashsize" conntrackHashsize=32768
I0523 03:44:23.344639       1 conntrack.go:100] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_close_wait" value=3600
I0523 03:44:23.345321       1 config.go:188] "Starting service config controller"
I0523 03:44:23.345418       1 shared_informer.go:311] Waiting for caches to sync for service config
I0523 03:44:23.345488       1 config.go:97] "Starting endpoint slice config controller"
I0523 03:44:23.345510       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0523 03:44:23.346859       1 config.go:315] "Starting node config controller"
I0523 03:44:23.346882       1 shared_informer.go:311] Waiting for caches to sync for node config
I0523 03:44:23.446311       1 shared_informer.go:318] Caches are synced for endpoint slice config
I0523 03:44:23.446434       1 shared_informer.go:318] Caches are synced for service config
I0523 03:44:23.447822       1 shared_informer.go:318] Caches are synced for node config

4.9.2. Prerequisites for switching kube-proxy to IPVS

  • ipset and ipvsadm must be installed and the IPVS modules loaded; this was already done earlier in this guide, see section 2.12
# Install the packages
-> yum install -y ipset ipvsadm

# Create a script that loads the IPVS modules
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

# Make the script executable so the IPVS modules are loaded at boot, and run it once now
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

# Check that the modules are loaded
lsmod | grep ip_vs
lsmod | grep nf_conntrack

4.9.3. Set kube-proxy to IPVS mode

-> kubectl edit configmap kube-proxy -n kube-system
mode: ""#原配置此处为空,需要修改为mode: "ipvs"
  • 修改情况后的配置
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.224.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocal:
      bridgeInterface: ""
      interfaceNamePrefix: ""
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      localhostNodePorts: null
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"#原配置此处为空,
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    winkernel:
      enableDSR: false
      forwardHealthCheckVip: false
      networkName: ""
      rootHnsEndpointName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.63.61:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  annotations:
    kubeadm.kubernetes.io/component-config.hash: sha256:02b3b1ca77e40c710a67aa9cc54621a6534e738e98f7100a9f4bc76752dd705c
  creationTimestamp: "2023-05-21T15:15:18Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "264"
  uid: 20163b91-d0e3-4774-8764-6738905e9739

4.9.4. Restart kube-proxy and check the mode

kubectl rollout restart daemonset kube-proxy -n kube-system 
  • Check the kube-proxy mode; restarting the DaemonSet destroys the old kube-proxy pods and recreates them
-> kubectl get pods -n kube-system | grep 'kube-proxy'
kube-proxy-m4fpv                         1/1     Running   0             19s
kube-proxy-qn6bd                         1/1     Running   0             16s
kube-proxy-rqcv8                         1/1     Running   0             24s

-> kubectl logs -n kube-system kube-proxy-m4fpv
I0524 06:24:40.484072       1 node.go:141] Successfully retrieved node IP: 192.168.63.61
I0524 06:24:40.484455       1 server_others.go:110] "Detected node IP" address="192.168.63.61"
I0524 06:24:40.661521       1 server_others.go:263] "Using ipvs Proxier"
I0524 06:24:40.661627       1 server_others.go:265] "Creating dualStackProxier for ipvs"
I0524 06:24:40.661731       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
E0524 06:24:40.662559       1 proxier.go:354] "Can't set sysctl, kernel version doesn't satisfy minimum version requirements" sysctl="net/ipv4/vs/conn_reuse_mode" minimumKernelVersion="4.1"
I0524 06:24:40.663593       1 proxier.go:408] "IPVS scheduler not specified, use rr by default"
E0524 06:24:40.664889       1 proxier.go:354] "Can't set sysctl, kernel version doesn't satisfy minimum version requirements" sysctl="net/ipv4/vs/conn_reuse_mode" minimumKernelVersion="4.1"
I0524 06:24:40.665195       1 proxier.go:408] "IPVS scheduler not specified, use rr by default"
I0524 06:24:40.665860       1 ipset.go:116] "Ipset name truncated" ipSetName="KUBE-6-LOAD-BALANCER-SOURCE-CIDR" truncatedName="KUBE-6-LOAD-BALANCER-SOURCE-CID"
I0524 06:24:40.665918       1 ipset.go:116] "Ipset name truncated" ipSetName="KUBE-6-NODE-PORT-LOCAL-SCTP-HASH" truncatedName="KUBE-6-NODE-PORT-LOCAL-SCTP-HAS"
I0524 06:24:40.666352       1 server.go:657] "Version info" version="v1.27.2"
I0524 06:24:40.666388       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0524 06:24:40.674919       1 conntrack.go:52] "Setting nf_conntrack_max" nfConntrackMax=131072
I0524 06:24:40.677792       1 config.go:97] "Starting endpoint slice config controller"
I0524 06:24:40.677847       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0524 06:24:40.681990       1 config.go:315] "Starting node config controller"
I0524 06:24:40.682059       1 shared_informer.go:311] Waiting for caches to sync for node config
I0524 06:24:40.683166       1 config.go:188] "Starting service config controller"
I0524 06:24:40.683193       1 shared_informer.go:311] Waiting for caches to sync for service config
I0524 06:24:40.778125       1 shared_informer.go:318] Caches are synced for endpoint slice config
I0524 06:24:40.783189       1 shared_informer.go:318] Caches are synced for node config
I0524 06:24:40.784469       1 shared_informer.go:318] Caches are synced for service config
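  • Besides the "Using ipvs Proxier" log line, the IPVS virtual servers can be listed directly on any node (a suggested check; ipvsadm was installed in section 2.6)
ipvsadm -Ln | head -n 20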

