
Deploying K8s 1.27 on BC-Linux for Euler

Published: 2023-07-27 11:57:28


1. Introduction

  • Background

BC-Linux for Euler, the "Big Cloud" enterprise operating system, is an enterprise-grade Linux built on the openEuler community distribution, developed through customization on top of the open-source community's work. It is currently used mainly inside China Mobile and is the workhorse system for domestic-platform migration. This deployment uses BC-Linux for Euler 21.10 and kubeadm to build a dual-stack (IPv6-primary) Kubernetes 1.27.4 environment.

1.1 Host Information

Hostname    Kernel Version  IPv4 Address   IPv6 Address  Role
bclinux-11  4.19.90-2107    172.168.80.11  1:2:3:4::11   master
bclinux-12  4.19.90-2107    172.168.80.12  1:2:3:4::12   master
bclinux-13  4.19.90-2107    172.168.80.13  1:2:3:4::13   node

1.2 Components and Layout

Component | Description | Placement | Recommendation
nginx | not needed for a single-node cluster; load-balances the api-servers in a multi-master cluster | the two master nodes | with multiple masters, any two or three will do
keepalived | not needed for a single-node cluster; provides high availability for nginx | same as nginx | same as nginx
docker | runs the containers | all nodes |
cri-dockerd | since k8s 1.24 the in-tree Docker integration is no longer maintained, so cri-dockerd is needed to drive Docker | all nodes |
containerd | another container runtime; choose either it or Docker | all nodes |
kubectl | the k8s command-line tool for managing the cluster | master nodes | master nodes
kubeadm | the k8s toolbox for creating a cluster and joining nodes to it | all nodes |
kubelet | runs on every node; manages the node's containers and pods and monitors both | all nodes |
kube-proxy | runs on every node; maintains Service communication and load balancing | all nodes, as a container |
kube-scheduler | runs on master nodes; the k8s scheduler, assigns pods to suitable nodes | master nodes, as a container |
kube-controller-manager | runs on master nodes; the k8s controller, manages pod replica counts, image versions, etc. | master nodes, as a container |
kube-apiserver | runs on master nodes; the hub of k8s, every component goes through the api-server | master nodes, as a container |
coredns | the internal DNS of k8s; resolves internal svc names and can forward to upstream DNS servers | any k8s node, as a container |
etcd | the k8s datastore, built-in or external; all k8s data lives in etcd | as a container or standalone | built-in for single-node, a standalone cluster otherwise
CNI-calico | the k8s network layer; cross-host pod traffic goes through the CNI network | all nodes, as a container |
metrics-server | reports pod and node resource usage | any k8s node, as a container |

2. Base System Tuning

2.1 Enable IPv6

  1. Check IPv6 support
[root@bclinux-11 ~]# cat /proc/net/if_inet6 # any output here means IPv6 is supported
[root@bclinux-11 ~]# grep ipv6 /etc/sysctl.conf 
# enable IPv6
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
  2. Configure IPv6
[root@bclinux-11 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens160 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=172.168.80.11
PREFIX=24
GATEWAY=172.168.80.2
DNS1=114.114.114.114
IPV6_PRIVACY=no
IPV6ADDR=1:2:3:4::11/64
IPV6_DEFAULTGW=1:2:3:4::2
[root@bclinux-11 ~]# ip a show ens160 # verify the IPv6 address is present
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:16:ba:aa brd ff:ff:ff:ff:ff:ff
    inet 172.168.80.11/24 brd 172.168.80.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 1:2:3:4::11/64 scope global noprefixroute 
       valid_lft forever preferred_lft forever

2.2 Raise the Open File Limit

[root@bclinux-11 ~]# egrep -v '^#|^$' /etc/security/limits.conf
* soft nofile 655350
* hard nofile 655350

2.3 Configure hosts Resolution


[root@bclinux-11 ~]# cat /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
1:2:3:4::11 bclinux-11
1:2:3:4::12 bclinux-12
1:2:3:4::13 bclinux-13
172.168.80.11 bclinux-11
172.168.80.12 bclinux-12
172.168.80.13 bclinux-13

2.4 Kernel Parameter Tuning

[root@bclinux-11 ~]# cat /etc/sysctl.d/k8s.conf 
# IPv6 settings
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
# IPv4 settings
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.forwarding = 1
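Loading the file with `sysctl --system` requires root, but the file format can be sanity-checked without privileges first. A minimal sketch that replays the settings above through a "key = value" format check:

```shell
# Replay the k8s.conf settings through a "key = value" format check;
# this catches typos before sysctl --system (which needs root) runs.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.forwarding = 1
EOF
bad=$(grep -Ev '^(#|$)' "$conf" | grep -Evc '^[a-z0-9._-]+ = [0-9]+$' || true)
echo "malformed lines: $bad"
rm -f "$conf"
```

Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded (modprobe br_netfilter); otherwise applying the file reports unknown keys.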

2.5 Disable SELinux

[root@bclinux-11 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config  && setenforce 0

2.6 Firewall

  1. Stop and disable the firewall, or
  2. configure firewall rules that open the Kubernetes ports, or leave traffic between the cluster hosts unrestricted
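For option 2, a sketch that generates firewalld commands for the standard control-plane TCP ports (6443 api-server, 2379-2380 etcd, 10250 kubelet, 10257 controller-manager, 10259 scheduler). The loop only prints the commands so they can be reviewed before running as root:

```shell
# Generate (not execute) firewalld commands for the standard Kubernetes
# control-plane TCP ports; review the output, then run it as root.
k8s_tcp_ports="6443 2379-2380 10250 10257 10259"
for p in $k8s_tcp_ports; do
    echo "firewall-cmd --permanent --add-port=${p}/tcp"
done
echo "firewall-cmd --reload"
```

Worker nodes need 10250/tcp and the NodePort range 30000-32767/tcp instead of the etcd and control-plane ports.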

2.7 IPVS Support

  1. Check whether the IPVS modules are loaded
[root@bclinux-11 ~]# lsmod | grep ip_vs # output means they are loaded; no output means they are not
  2. Load the IPVS modules
[root@bclinux-11 ~]# cat >/etc/sysconfig/modules/ipvs.modules <<-"EOF"
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir |grep -o "^[^.]*");do
    /sbin/modinfo -F filename $mod &>/dev/null
    if [ $? -eq 0 ];then
        /sbin/modprobe $mod
    fi
done
EOF
[root@bclinux-11 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && /etc/sysconfig/modules/ipvs.modules
  3. Verify
[root@bclinux-11 ~]# lsmod | grep ip_vs
ip_vs_wrr              16384  0
ip_vs_wlc              16384  0
ip_vs_sh               16384  0
ip_vs_sed              16384  0
ip_vs_rr               16384  0
ip_vs_pe_sip           16384  0
nf_conntrack_sip       32768  1 ip_vs_pe_sip
ip_vs_ovf              16384  0
ip_vs_nq               16384  0
ip_vs_lc               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_ftp              16384  0
ip_vs_fo               16384  0
ip_vs_dh               16384  0
ip_vs                 172032  28 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_ovf,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_pe_sip,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_nat                 36864  3 nf_nat_ipv6,nf_nat_ipv4,ip_vs_ftp
nf_conntrack          163840  6 xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,nf_conntrack_sip,ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  3 nf_conntrack,nf_nat,ip_vs

2.8 Kernel Version Requirement

  • Known-problem kernel version: 3.10.0-957.el7.x86_64

  • Known-problem OS versions: CentOS releases before 7.6

  • Symptom: after deployment every component works and pods can be created, but once a Service is created its name cannot be pinged from inside a pod, whether in the same namespace or across namespaces

  • The current system kernel is 4.19, so no upgrade is needed
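The check implied above (is the running kernel at least 4.19?) can be scripted with `sort -V`, which orders version strings numerically; a minimal sketch:

```shell
# Succeeds when the version in $2 (default: the running kernel) is at
# least $1. sort -V orders version strings numerically, so the required
# version sorts first exactly when the kernel is new enough.
kernel_at_least() {
    required="$1"
    current="${2:-$(uname -r)}"
    [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]
}
kernel_at_least 4.19 4.19.90-2107 && echo "4.19.90-2107: ok"
kernel_at_least 4.19 3.10.0-957.el7.x86_64 || echo "3.10.0-957: too old"
```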

2.9 yum Repository Configuration

[root@bclinux-11 ~]# ls /etc/yum.repos.d/
BCLinux.repo  centos8.repo  docker-ce.repo  kubernetes.repo
# BC-Linux repo
[root@bclinux-11 ~]# cat /etc/yum.repos.d/BCLinux.repo 
[baseos]
name=BC-Linux-release - baseos
baseurl=http://mirrors.bclinux.org/bclinux/oe21.10/OS/$basearch/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-BCLinux-For-Euler
[everything]
name=BC-Linux-release - everything
baseurl=http://mirrors.bclinux.org/bclinux/oe21.10/everything/$basearch/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-BCLinux-For-Euler
[update]
name=BC-Linux-release - update
baseurl=http://mirrors.bclinux.org/bclinux/oe21.10/update/$basearch/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-BCLinux-For-Euler
[extras]
name=BC-Linux-release - extras
baseurl=http://mirrors.bclinux.org/bclinux/oe21.10/extras/$basearch/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-BCLinux-For-Euler
# CentOS 8 Stream repo, needed for installing Docker
[root@bclinux-11 ~]# cat /etc/yum.repos.d/centos8.repo 
[BaseOS]
name=CentOS-8-stream - Base - repo.huaweicloud.com
baseurl=https://repo.huaweicloud.com/centos/8-stream/BaseOS/$basearch/os/
#mirrorlist=https://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=BaseOS&infra=$infra
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/centos/RPM-GPG-KEY-CentOS-Official 
#released updates 
[AppStream]
name=CentOS-8-stream - AppStream - repo.huaweicloud.com
baseurl=https://repo.huaweicloud.com/centos/8-stream/AppStream/$basearch/os/
#mirrorlist=https://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=AppStream&infra=$infra
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/centos/RPM-GPG-KEY-CentOS-Official
[PowerTools]
name=CentOS-8-stream - PowerTools - repo.huaweicloud.com
baseurl=https://repo.huaweicloud.com/centos/8-stream/PowerTools/$basearch/os/
#mirrorlist=https://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=PowerTools&infra=$infra
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/centos/RPM-GPG-KEY-CentOS-Official
#additional packages that may be useful
[extras]
name=CentOS-8-stream - Extras - repo.huaweicloud.com
baseurl=https://repo.huaweicloud.com/centos/8-stream/extras/$basearch/os/
#mirrorlist=https://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=https://repo.huaweicloud.com/centos/RPM-GPG-KEY-CentOS-Official
# Docker repo
[root@bclinux-11 ~]# cat /etc/yum.repos.d/docker-ce.repo 
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
# Kubernetes repo
[root@bclinux-11 ~]# cat /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

3. Deployment

3.1 Install and Configure Docker and cri-dockerd - all hosts

Since 1.24, Kubernetes no longer supports Docker directly and only accepts container runtimes that implement the standard CRI interface; the mainstream replacement today is containerd. containerd's commands and options are largely the same as Docker's; the one real gap is that it cannot build images, so Docker is still needed for image builds. Docker (Mirantis) maintains the cri-dockerd project precisely to make Docker satisfy the Kubernetes CRI standard.

3.1.1 docker

# Install docker
[root@bclinux-11 ~]# yum install docker-ce -y
# Configure docker
[root@bclinux-11 ~]# cat >/etc/docker/daemon.json<<-"EOF"
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "insecure-registries":["test.harbor.org:18080"],
  "data-root": "/home/docker_data",
  "log-driver":"json-file",
  "log-opts": {"max-size" :"50m","max-file":"1"},
  "exec-opts": ["native.cgroupdriver=systemd"]
} 
EOF
# Start and enable on boot
[root@bclinux-11 ~]# systemctl start docker && systemctl enable docker && systemctl status docker
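If dockerd refuses to start, a malformed daemon.json is the usual culprit; it can be validated without touching the service. A minimal sketch using python3's stdlib JSON parser on a copy of the config above:

```shell
# Validate daemon.json syntax without touching the service.
# A temp copy of the config above keeps the check self-contained.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "insecure-registries": ["test.harbor.org:18080"],
  "data-root": "/home/docker_data",
  "log-driver": "json-file",
  "log-opts": {"max-size": "50m", "max-file": "1"},
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
if python3 -m json.tool "$cfg" > /dev/null; then
    json_ok=1
    echo "daemon.json: valid JSON"
else
    json_ok=0
    echo "daemon.json: syntax error" >&2
fi
rm -f "$cfg"
```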

3.1.2 cri-dockerd

# Install
[root@bclinux-11 ~]# yum install -y https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el8.x86_64.rpm
# Configure
[root@bclinux-11 ~]# egrep -v '^#|^$' /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image registry.aliyuncs.com/google_containers/pause:3.9 --ipv6-dual-stack
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
# Start and enable on boot
[root@bclinux-11 ~]# systemctl start cri-docker.service && systemctl enable cri-docker.service && systemctl status cri-docker.service

3.2 containerd Deployment - all hosts

  • If Docker is deployed there is no need to deploy containerd; pick one of the two. So why use containerd at all when cri-dockerd exists?

    • containerd advantages
    1. Two fewer layers in the call path than Docker, so it is faster
    2. More secure than Docker
    • containerd drawbacks (not necessarily drawbacks)
    1. Cannot build images; you still need Docker or Podman for that
    2. Its commands are a little more complex than Docker's
  • Call-path diagram

docker:     K8s -> Docker-shim -> Docker -> Containerd -> Containers
containerd: K8s -> Containerd -> Containers

3.2.1 Install containerd

[root@bclinux ~]# tar xf cri-containerd-1.7.2-linux-amd64.tar.gz -C /
# Configure containerd
[root@bclinux ~]# mkdir /etc/containerd -p && containerd config default > /etc/containerd/config.toml
# Two settings to change:
# line 65: sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8" [switch the pause image to a domestic mirror]
# line 137: SystemdCgroup = true [have runc use the systemd cgroup driver for container resource partitioning and isolation]
[root@bclinux ~]# vim /etc/containerd/config.toml # add the private harbor registry
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."172.168.80.14:18080".auth]
          username = "admin"
          password = "Harbor12345"
      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."172.168.80.14:18080"]
          endpoint = ["http://172.168.80.14:18080"]
[root@bclinux ~]# systemctl start containerd.service && systemctl enable containerd.service && systemctl status containerd.service

3.3 nginx Deployment - 2 master nodes

# Install build dependencies
[root@bclinux-11 ~]# yum -y install gcc automake autoconf libtool make gcc-c++ openssl openssl-devel zlib-devel pcre-devel zlib pcre libxslt-devel  perl-devel
# Build nginx
[root@bclinux-11 ~]# tar xf nginx-1.24.0.tar.gz
[root@bclinux-11 nginx-1.24.0]# ./configure --prefix=/data/nginx --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' --with-ipv6
..............................
  nginx path prefix: "/data/nginx"
  nginx binary file: "/data/nginx/sbin/nginx"
  nginx modules path: "/data/nginx/modules"
  nginx configuration prefix: "/data/nginx/conf"
  nginx configuration file: "/data/nginx/conf/nginx.conf"
  nginx pid file: "/data/nginx/logs/nginx.pid"
  nginx error log file: "/data/nginx/logs/error.log"
  nginx http access log file: "/data/nginx/logs/access.log"
  nginx http client request body temporary files: "client_body_temp"
  nginx http proxy temporary files: "proxy_temp"
  nginx http fastcgi temporary files: "fastcgi_temp"
  nginx http uwsgi temporary files: "uwsgi_temp"
  nginx http scgi temporary files: "scgi_temp"

./configure: warning: the "--with-ipv6" option is deprecated # nginx warns that --with-ipv6 is deprecated; IPv6 support is now built in by default
[root@bclinux-11 nginx-1.24.0]# make && make install
# Configure the systemd unit
[root@bclinux-11 nginx-1.24.0]# cat >/usr/lib/systemd/system/nginx.service<<-"EOF"
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/data/nginx/logs/nginx.pid
ExecStart=/data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf
ExecReload=/bin/sh -c "/bin/kill -s HUP $(/bin/cat /data/nginx/logs/nginx.pid)"
ExecStop=/bin/sh -c "/bin/kill -s TERM $(/bin/cat /data/nginx/logs/nginx.pid)"
[Install]
WantedBy=multi-user.target
EOF
# Configure the stream proxy
[root@bclinux-11 ~]# cat /data/nginx/conf/nginx.conf

user  root;
worker_processes  auto;

error_log  logs/error.log;
error_log  logs/error.log  notice;
error_log  logs/error.log  info;

pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile        on;
    keepalive_timeout  65;
}
stream {
    log_format proxy '$remote_addr [$time_local] '
               '$protocol $status $bytes_sent $bytes_received '
               '$session_time -> $upstream_addr '
               '$upstream_bytes_sent $upstream_bytes_received $upstream_connect_time';
    access_log logs/tcp-access.log proxy;
    upstream kube-apiservers {
        hash  $remote_addr consistent;
        server [1:2:3:4::11]:6443 weight=6 max_fails=1 fail_timeout=10s;
        server [1:2:3:4::12]:6443 weight=6 max_fails=1 fail_timeout=10s;
    }
    server {
        listen [::]:8443 ipv6only=on;
        proxy_connect_timeout 30s;
        proxy_timeout 60s;
        proxy_pass kube-apiservers;
    }
}
# Start and enable on boot
[root@bclinux-11 nginx-1.24.0]# systemctl start nginx && systemctl enable nginx && systemctl status nginx
  • Change the nginx banner string
# Edit the version macros in the source before building
cat src/core/nginx.h
#define NGINX_VERSION      ""
#define NGINX_VER          "流氓兔/" NGINX_VERSION

#ifdef NGX_BUILD
#define NGINX_VER_BUILD    NGINX_VER " (" NGX_BUILD ")"
#else
#define NGINX_VER_BUILD    NGINX_VER
#endif

#define NGINX_VAR          ""
#define NGX_OLDPID_EXT     ".oldbin"

3.4 keepalived Deployment - 2 master nodes

[root@bclinux-11 ~]# yum install keepalived -y
[root@bclinux-11 keepalived]# tee >/etc/keepalived/keepalived.conf << "EOF"
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_ng {
    script "/etc/keepalived/script/check_ng.sh"
    interval 3
    weight -2
}

vrrp_instance VI_1 {
    state MASTER # change to BACKUP on the standby node
    interface ens160
    virtual_router_id 351
    priority 200 # change to 150 on the standby node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    unicast_src_ip 172.168.80.11 # on the standby node, change to the standby's own IP
    unicast_peer { 
      172.168.80.12 # on the standby node, change to the master's IP
    }
    virtual_ipaddress {
        172.168.80.10
    }
    virtual_ipaddress_excluded {
        1:2:3:4::10
    }
    track_script {
        check_ng
    }
}
EOF
[root@bclinux-11 keepalived]# mkdir -p /etc/keepalived/script/ && tee >/etc/keepalived/script/check_ng.sh << "EOF" && chmod +x /etc/keepalived/script/check_ng.sh
#!/bin/bash
nginx_num=`ps -ef|grep [n]ginx|wc -l`
pid_file='/data/nginx/logs/nginx.pid'
if [ "${nginx_num}" != 0 -a -f "${pid_file}" ];then
  exit 0
else
  systemctl stop keepalived && exit 1
fi
EOF
[root@bclinux-11 keepalived]# systemctl start keepalived.service  && systemctl enable keepalived.service && systemctl status keepalived.service

3.5 etcd Deployment - three nodes

etcd is a distributed key-value store; Kubernetes keeps all of its state in etcd. A default kubeadm install starts only a single etcd pod, which is a single point of failure and strongly discouraged in production, so here we build a cluster from 3 servers, which tolerates 1 machine failure. You could also use 5 servers, which tolerates 2 failures.

  • To save machines, etcd is co-located with the k8s nodes here. It can also be deployed outside the k8s cluster, as long as the apiserver can reach it.

  • For a single-node k8s, either the built-in or an external etcd works
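The 3-tolerates-1, 5-tolerates-2 arithmetic above is etcd's Raft quorum rule: a cluster of n members stays available while floor(n/2)+1 members are up, so it tolerates floor((n-1)/2) failures. A quick sketch:

```shell
# etcd availability needs a Raft quorum of floor(n/2)+1 members,
# so an n-member cluster tolerates floor((n-1)/2) failures.
for n in 1 3 5; do
    quorum=$(( n / 2 + 1 ))
    tolerated=$(( (n - 1) / 2 ))
    echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

Even member counts add no fault tolerance over the next-smaller odd count, which is why 3 and 5 are the usual cluster sizes.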

3.5.1 Prepare the cfssl Certificate Tooling

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

3.5.2 Generate the etcd Certificates

# Self-signed certificate authority (CA)
# Create a working directory:
[root@bclinux-11 ~]# mkdir -p ~/etcd_tls
[root@bclinux-11 ~]# cd ~/etcd_tls
# Self-sign the CA:
[root@bclinux-11 etcd_tls]# cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
[root@bclinux-11 etcd_tls]# cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
# Generate the certificate:
[root@bclinux-11 etcd_tls]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/05/11 16:36:24 [INFO] generating a new CA key and certificate from CSR
2021/05/11 16:36:24 [INFO] generate received request
2021/05/11 16:36:24 [INFO] received CSR
2021/05/11 16:36:24 [INFO] generating key: rsa-2048
2021/05/11 16:36:24 [INFO] encoded CSR
2021/05/11 16:36:24 [INFO] signed certificate with serial number 359821359061850962149376009879415970209566630594
[root@bclinux-11 etcd_tls]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
# This produces ca.pem and ca-key.pem.
# Use the self-signed CA to issue the etcd HTTPS certificate
# Create the certificate signing request:
[root@bclinux-11 etcd_tls]# cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "1:2:3:4::11",
    "1:2:3:4::12",
    "1:2:3:4::13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
# Note: the IPs in the hosts field above are the cluster-internal IPs of all etcd nodes. Change them to your own, and do not leave any out! For easier scaling later you can add a few spare IPs. To switch to IPv6, write IPv6 addresses here.
# Generate the certificate:
[root@bclinux-11 etcd_tls]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2021/05/11 16:38:34 [INFO] generate received request
2021/05/11 16:38:34 [INFO] received CSR
2021/05/11 16:38:34 [INFO] generating key: rsa-2048
2021/05/11 16:38:35 [INFO] encoded CSR
2021/05/11 16:38:35 [INFO] signed certificate with serial number 686508384315413943399934921027716200577151171267
2021/05/11 16:38:35 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@bclinux-11 etcd_tls]# ls server*
server.csr  server-csr.json  server-key.pem  server.pem
# This produces server.pem and server-key.pem.
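The hosts list baked into server.pem can be double-checked after signing. The sketch below creates a throwaway self-signed certificate with the same IPv6 SANs just to demonstrate the openssl inspection command; on the real node, run the `openssl x509` line against server.pem instead (openssl prints IPv6 SANs in expanded form):

```shell
# Inspect a certificate's Subject Alternative Names with openssl.
# A throwaway self-signed cert stands in for server.pem here.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=etcd" \
    -addext "subjectAltName=IP:1:2:3:4::11,IP:1:2:3:4::12,IP:1:2:3:4::13" \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
san=$(openssl x509 -in "$tmp/cert.pem" -noout -ext subjectAltName)
echo "$san"
rm -rf "$tmp"
```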

3.5.3 Deploy etcd

  • Download: https://github.com/etcd-io/etcd/releases

  • Perform the following on one node, then copy everything node 1 generated to nodes 2 and 3. If deploying over IPv6, replace the IPv4 addresses in the configs with IPv6 addresses, as done below.

# Create the working directory and unpack the binaries - node 1
[root@bclinux-11 ~]# mkdir /opt/etcd/{bin,cfg,ssl} -p
[root@bclinux-11 ~]# tar zxvf etcd-v3.5.6-linux-amd64.tar.gz
[root@bclinux-11 ~]# mv etcd-v3.5.6-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
# Create the etcd config file - node 1
[root@bclinux-11 ~]# cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://[1:2:3:4::11]:2380"
ETCD_LISTEN_CLIENT_URLS="https://[1:2:3:4::11]:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://[1:2:3:4::11]:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://[1:2:3:4::11]:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://[1:2:3:4::11]:2380,etcd-2=https://[1:2:3:4::12]:2380,etcd-3=https://[1:2:3:4::13]:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# ETCD_NAME: node name, unique within the cluster
# ETCD_DATA_DIR: data directory
# ETCD_LISTEN_PEER_URLS: cluster peer listen address
# ETCD_LISTEN_CLIENT_URLS: client listen address
# ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
# ETCD_ADVERTISE_CLIENT_URLS: advertised client address
# ETCD_INITIAL_CLUSTER: cluster member addresses
# ETCD_INITIAL_CLUSTER_TOKEN: cluster token
# ETCD_INITIAL_CLUSTER_STATE: state when joining: new for a new cluster, existing to join an existing one
# Manage etcd with systemd - node 1
[root@bclinux-11 ~]# cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
# Copy the certificates generated earlier - node 1
[root@bclinux-11 ~]# cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/

3.5.4 Copy Everything Node 1 Generated to Nodes 2 and 3

[root@bclinux-11 ~]# scp -r /opt/etcd/ root@[1:2:3:4::12]:/opt/
[root@bclinux-11 ~]# scp /usr/lib/systemd/system/etcd.service root@[1:2:3:4::12]:/usr/lib/systemd/system/
[root@bclinux-11 ~]# scp -r /opt/etcd/ root@[1:2:3:4::13]:/opt/
[root@bclinux-11 ~]# scp /usr/lib/systemd/system/etcd.service root@[1:2:3:4::13]:/usr/lib/systemd/system/
# Then, on nodes 2 and 3, change the node name and the current server's IP in etcd.conf:
[root@bclinux-11 ~]# cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"                                            # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://[1:2:3:4::11]:2380"            # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://[1:2:3:4::11]:2379"          # change to the current server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://[1:2:3:4::11]:2380" # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://[1:2:3:4::11]:2379"       # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://[1:2:3:4::11]:2380,etcd-2=https://[1:2:3:4::12]:2380,etcd-3=https://[1:2:3:4::13]:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# Start etcd
[root@bclinux-11 ~]# systemctl daemon-reload && systemctl start etcd && systemctl enable etcd && systemctl status etcd
# Note: start all etcd nodes together
# With ansible:
ansible etcd -m service -a 'name=etcd state=started enabled=yes daemon-reload=yes'
# Note: etcd is an ansible inventory group containing just the three etcd host IPs

3.5.5 Check Cluster Health

[root@bclinux-11 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://[1:2:3:4::11]:2379,https://[1:2:3:4::12]:2379,https://[1:2:3:4::13]:2379" endpoint health --write-out=table
+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://[1:2:3:4::11]:2379 |   true | 17.543566ms |       |
| https://[1:2:3:4::13]:2379 |   true | 18.424848ms |       |
| https://[1:2:3:4::12]:2379 |   true | 18.384393ms |       |
+----------------------------+--------+-------------+-------+
# Output like the above means the cluster is up.
# If something is wrong, check the logs first: /var/log/messages or journalctl -u etcd

3.6 Install Kubernetes - all hosts

[root@bclinux-11 ~]# yum install kubeadm-1.27.4 kubelet-1.27.4 kubectl-1.27.4 -y
[root@bclinux-11 ~]# systemctl enable kubelet.service

3.6.1 Extend the Default 1-Year Certificate Lifetime

  • With kubeadm, every certificate except the CA (10 years) is valid for 1 year; to change the default you must patch the kubeadm source and rebuild it.
  • Source: https://github.com/kubernetes/kubernetes/releases
  • Since this install is for testing, the change is skipped; to make it, see below
# Change the default certificate lifetimes
[root@bclinux-11 ~]# vim kubernetes-1.20.10/staging/src/k8s.io/client-go/util/cert/cert.go
 65                 NotBefore:             now.UTC(),
 66                 NotAfter:              now.Add(duration365d * 100).UTC(),
 # this is the CA certificate expiry; change the original 10 to 100
[root@bclinux-11 ~]# vim kubernetes-1.20.10/cmd/kubeadm/app/constants/constants.go
 49         CertificateValidity = time.Hour * 24 * 365 * 100
 # this sets the other certificates' expiry; append *100
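For reference, the arithmetic behind the patched constant above: `CertificateValidity = time.Hour * 24 * 365` is one year (8760h), and appending `* 100` stretches it to roughly a century:

```shell
# kubeadm's CertificateValidity is time.Hour * 24 * 365 by default;
# the patch appends * 100.
default_hours=$(( 24 * 365 ))
patched_hours=$(( 24 * 365 * 100 ))
echo "default: ${default_hours}h (1 year)"
echo "patched: ${patched_hours}h (~100 years)"
```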

3.6.2 Install the Go Toolchain

[root@bclinux-11 ~]# cat kubernetes-1.20.10/build/build-image/cross/VERSION
v1.15.15-legacy-1
# Install the Go environment
[root@bclinux-11 ~]# yum install gcc make rsync jq -y
[root@bclinux-11 ~]# wget https://dl.google.com/go/go1.15.15.linux-amd64.tar.gz
[root@bclinux-11 ~]# tar zxvf go1.15.15.linux-amd64.tar.gz -C /usr/local
[root@bclinux-11 ~]# tee >> /etc/profile <<-"EOF"
export GOROOT=/usr/local/go
export GOPATH=/usr/local/gopath
export PATH=$PATH:$GOROOT/bin
EOF
[root@bclinux-11 ~]# source /etc/profile
[root@bclinux-11 ~]# go version
go version go1.15.15 linux/amd64
# Build kubeadm
[root@bclinux-11 ~]# cd kubernetes-1.20.10/
[root@bclinux-11 kubernetes-1.20.10]# make all WHAT=cmd/kubeadm GOFLAGS=-v
# Replace the stock kubeadm - all hosts
[root@bclinux-11 kubernetes-1.20.10]# ansible k8s-all -m copy -a "src=./_output/local/bin/linux/amd64/kubeadm dest=/usr/bin/kubeadm backup=yes"
# ./_output/local/bin/linux/amd64/kubeadm is the build output path

3.7 Kubernetes Deployment

3.7.1 Initialization - 1 master node

3.7.1.1 Single-Node Initialization
  • The initialization yaml file:
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration # private settings of the initial master node
bootstrapTokens:        # a bootstrapToken can be specified; by default it expires and is removed after 24h
- token: "9a08jv.c0izixklcxtmnze7"
  description: "kubeadm bootstrap token"
  ttl: "24h"
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"      # a certificateKey can be specified; by default it expires and is removed after two hours
localAPIEndpoint:
  advertiseAddress: "1:2:3:4::11"    # master node IP
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: master
  kubeletExtraArgs:
    node-ip: "1:2:3:4::11,172.168.80.11"    # master node IPs

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration      # settings shared by all master nodes
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.27.4
controlPlaneEndpoint: "[1:2:3:4::11]:6443"    # api-server address; a domain name is recommended
networking:
  podSubnet: 172:244::/64,172.244.0.0/16    # put IPv4 first to have kubectl get node display IPv4 addresses
  serviceSubnet: 172:96::/112,172.96.0.0/18    # put IPv4 first to have kubectl get service display IPv4 addresses
etcd:
  local:
    dataDir: "/home/etcd_data"
    extraArgs:
      listen-metrics-urls: http://[::]:2381

apiServer:
  certSANs: 
  - "1:2:3:4::11"
  - "172.168.80.11"
  - "bclinux-11"
  - "172.96.0.1"
  - "172:96::1"
  extraArgs:
    service-cluster-ip-range: 172:96::/112,172.96.0.0/18
    bind-address: "::"
    secure-port: "6443"
scheduler:
  extraArgs:
    bind-address: "::"
controllerManager:
  extraArgs:
    bind-address: "::"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
cgroupDriver: systemd
healthzBindAddress: "::"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "172:244::/64,172.244.0.0/16"    # the pod address ranges
mode: "ipvs"
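As a sanity check on the CIDR sizes chosen above, each range holds 2^(address_bits - prefix_length) addresses; a quick sketch (the IPv6 pod /64 holds 2^64 addresses, which overflows 63-bit shell arithmetic, so it is only noted in a comment):

```shell
# Each CIDR holds 2^(address_bits - prefix_length) addresses.
count() { echo "$1 -> $(( 1 << ($3 - $2) )) addresses"; }
count "172:96::/112"   112 128   # IPv6 service subnet
count "172.96.0.0/18"   18  32   # IPv4 service subnet
count "172.244.0.0/16"  16  32   # IPv4 pod subnet
# 172:244::/64 (IPv6 pods) holds 2^64 addresses, beyond 63-bit shell math.
```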
3.7.1.2 Cluster Initialization
[root@bclinux-11 ~]# cat kube-init.yaml 
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration # private settings of the initial master node
bootstrapTokens:        # a bootstrapToken can be specified; by default it expires and is removed after 24h
- token: "9a08jv.c0izixklcxtmnze7"
  description: "kubeadm bootstrap token"
  ttl: "24h"
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"      # a certificateKey can be specified; by default it expires and is removed after two hours
localAPIEndpoint:
  advertiseAddress: "1:2:3:4::11"    # master node IP
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: bclinux-11
  kubeletExtraArgs:
    node-ip: "1:2:3:4::11,172.168.80.11"    # master node IPs

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration      # settings shared by all master nodes
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.27.4
controlPlaneEndpoint: "[1:2:3:4::10]:8443"    # api-server address (the keepalived VIP, proxied by nginx); a domain name is recommended
networking:
  podSubnet: 172:244::/64,172.244.0.0/16    # put IPv4 first to have kubectl get node display IPv4 addresses
  serviceSubnet: 172:96::/112,172.96.0.0/18    # put IPv4 first to have kubectl get service display IPv4 addresses
etcd:
  external:  # use an external etcd
    endpoints:
    - https://[1:2:3:4::11]:2379 # the 3 etcd cluster nodes
    - https://[1:2:3:4::12]:2379
    - https://[1:2:3:4::13]:2379
    caFile: /data/etcd/ssl/ca.pem # certificates needed to connect to etcd
    certFile: /data/etcd/ssl/server.pem
    keyFile: /data/etcd/ssl/server-key.pem

apiServer:
  certSANs: 
  - "1:2:3:4::11"
  - "1:2:3:4::12"
  - "172.168.80.11"
  - "172.168.80.12"
  - "bclinux-11"
  - "bclinux-12"
  - "172.96.0.1"
  - "172:96::1"
  extraArgs:
    service-cluster-ip-range: 172:96::/112,172.96.0.0/18
    bind-address: "::"
    secure-port: "6443"
scheduler:
  extraArgs:
    bind-address: "::"
controllerManager:
  extraArgs:
    bind-address: "::"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
cgroupDriver: systemd
healthzBindAddress: "::"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "172:244::/64,172.244.0.0/16"    # pod address range
mode: "ipvs"
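As a quick sanity check on the pool sizes implied by the prefixes in this config (plain shell arithmetic, nothing kubeadm-specific; the /64 IPv6 pod pool is left out because 2^64 overflows 64-bit shell integers):

```shell
# Address counts implied by the prefix lengths used above.
echo "service IPv6 /112 -> $((2 ** (128 - 112))) addresses"   # 65536
echo "service IPv4 /18  -> $((2 ** (32 - 18))) addresses"     # 16384
echo "pod     IPv4 /16  -> $((2 ** (32 - 16))) addresses"     # 65536
```

In a dual-stack cluster each Service takes one IP from each family, so the smaller IPv4 /18 range (16384 addresses) is the effective ceiling on Service count here; both families comfortably cover a cluster of this size.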
[root@bclinux-11 ~]# kubeadm init --config kube-init.yaml --upload-certs
........................................
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join [1:2:3:4::10]:8443 --token 9a08jv.c0izixklcxtmnze7 \
	--discovery-token-ca-cert-hash sha256:71818aa4d010d77aa3f0864c04415da65db936e68d598710296c2bf38104e4cf \
	--control-plane --certificate-key e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join [1:2:3:4::10]:8443 --token 9a08jv.c0izixklcxtmnze7 \
	--discovery-token-ca-cert-hash sha256:71818aa4d010d77aa3f0864c04415da65db936e68d598710296c2bf38104e4cf
[root@bclinux-11 ~]# mkdir -p $HOME/.kube
[root@bclinux-11 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@bclinux-11 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
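The bootstrap token above expires after 24h and the certificate key after 2h; `kubeadm token create --print-join-command` and `kubeadm init phase upload-certs --upload-certs` regenerate them later. The `--discovery-token-ca-cert-hash` can also be recomputed from the cluster CA at any time; a sketch of that openssl pipeline, demonstrated here on a throwaway self-signed CA (on a real master you would read /etc/kubernetes/pki/ca.crt instead):

```shell
# Recompute the sha256 CA-cert hash used by kubeadm join.
# Demo CA generated on the fly; substitute /etc/kubernetes/pki/ca.crt on a master.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:${hash}"
rm -rf "$dir"
```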

3.7.2 Add master nodes - run on the remaining master nodes

[root@bclinux-12 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--node-ip 1:2:3:4::12,172.168.80.12 --fail-swap-on=false"
[root@bclinux-12 ~]# kubeadm join [1:2:3:4::10]:8443 --token 9a08jv.c0izixklcxtmnze7 --discovery-token-ca-cert-hash sha256:71818aa4d010d77aa3f0864c04415da65db936e68d598710296c2bf38104e4cf --control-plane --certificate-key e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204 --cri-socket=/var/run/cri-dockerd.sock
..........................
To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
[root@bclinux-12 ~]# kubectl get pod -n kube-system|grep api
kube-apiserver-bclinux-11            1/1     Running            0             15h
kube-apiserver-bclinux-12            0/1     CrashLoopBackOff   5 (16s ago)   3m38s # this pod is failing
# fix: edit this node's kube-apiserver static pod manifest
[root@bclinux-12 ~]# grep '\-\-advertise-address' /etc/kubernetes/manifests/kube-apiserver.yaml 
    - --advertise-address=172.168.80.12 # change this to the node's IPv6 address
[root@bclinux-12 ~]# kubectl get pod -n kube-system|grep api
kube-apiserver-bclinux-11            1/1     Running   0          15h
kube-apiserver-bclinux-12            1/1     Running   0          2m42s
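The manifest edit above can be scripted; a sketch of the same substitution with sed, run here against a scratch copy (on the real node you would target /etc/kubernetes/manifests/kube-apiserver.yaml directly, and the kubelet restarts the static pod by itself once the file changes):

```shell
# Demonstrate the advertise-address fix on a throwaway copy of the line.
manifest=$(mktemp)
printf '%s\n' '    - --advertise-address=172.168.80.12' > "$manifest"
sed -i 's/--advertise-address=172\.168\.80\.12/--advertise-address=1:2:3:4::12/' "$manifest"
grep -- '--advertise-address' "$manifest"
rm -f "$manifest"
```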

3.7.3 Add worker nodes - run on all worker nodes

[root@bclinux-13 ~]# cat /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--node-ip 1:2:3:4::13,172.168.80.13 --fail-swap-on=false"
[root@bclinux-13 ~]# kubeadm join [1:2:3:4::10]:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:d63a4cb2d94ee1fb7f882f14cae74f5ded7b0f187dc778130b9f3005168ed8cb --cri-socket=/var/run/cri-dockerd.sock 
.................................................
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3.7.4 Deploy the CNI network - run on any one master node

# Modify the calico config to disable IPIP and use BGP. On the same network, qperf tests of calico's three modes showed roughly these throughput losses: BGP ~10%, IPIP ~30%, VXLAN ~80%. BGP mode is recommended, but it places requirements on the underlying network (if the networking is unclear to you, research it separately).
[root@bclinux-11 ~]# vim calico-3.26.0.yaml
              "type": "calico-ipam",
              "assign_ipv4": "true", # add
              "assign_ipv6": "true"  # add
........................
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP6
              value: "autodetect"
            # enable NAT for outgoing IPv6 traffic
            - name: CALICO_IPV6POOL_NAT_OUTGOING
              value: "true"
            # select the NIC calico binds to; if v4 and v6 live on two different NICs, set IP_AUTODETECTION_METHOD and IP6_AUTODETECTION_METHOD separately
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens160"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Never" # disable IPIP
            - name: CALICO_IPV6POOL_IPIP
              value: "Never" # disable IPIP
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Enable or Disable VXLAN on the default IPv6 IP pool.
            - name: CALICO_IPV6POOL_VXLAN
              value: "Never"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "true" # changed from false to true
[root@bclinux-11 ~]# kubectl apply -f calico-3.26.0.yaml
[root@bclinux-11 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-786b679988-vm8bb   1/1     Running   0               9m42s
calico-node-d79wc                          1/1     Running   0               7m10s
calico-node-hvlwh                          1/1     Running   0               9m42s
calico-node-xt8lb                          1/1     Running   0               9m42s

3.7.5 Install metrics-server - run on any one master node

[root@bclinux-11 ~]# kubectl apply -f components.yaml
[root@bclinux-11 ~]# kubectl get pod -n kube-system |grep metrics
metrics-server-6467f9696d-926st            1/1     Running   0              35s
[root@bclinux-11 ~]# kubectl top node
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
bclinux-11   636m         31%    782Mi           57%       
bclinux-12   685m         34%    811Mi           59%       
bclinux-13   109m         5%     703Mi           51%
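If these numbers are needed in a script (e.g. for a crude capacity alert), `kubectl top node` output is plain whitespace-separated columns and parses with awk; a sketch against sample rows like the ones above:

```shell
# Print node name and CPU% from `kubectl top node`-style output,
# skipping the header row.
printf '%s\n' \
  'NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%' \
  'bclinux-11   636m         31%    782Mi           57%' \
  'bclinux-13   109m         5%     703Mi           51%' \
| awk 'NR > 1 { print $1, $3 }'
```

In real use the pipe source is simply `kubectl top node | awk 'NR > 1 { print $1, $3 }'`.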

3.7.6 Verify the cluster - run on any one master node

# create a Deployment with 3 replicas (in this 3-node cluster, one pod lands on each node)
[root@bclinux-11 ~]# kubectl create deployment nginx --image nginx:1.20.0 --replicas 3
# create a NodePort-type Service for it
[root@bclinux-11 ~]# kubectl expose deployment nginx --target-port 80 --port 80 --type NodePort
# check the pod and service addresses
[root@bclinux-11 ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP                             NODE         NOMINATED NODE   READINESS GATES
pod/nginx-7cf478bb58-7bmpm   1/1     Running   0          85s   172:244::390e:eaa8:1a33:8703   bclinux-12   <none>           <none>
pod/nginx-7cf478bb58-rb4fz   1/1     Running   0          85s   172:244::be3f:5e5d:bf32:cf80   bclinux-13   <none>           <none>
pod/nginx-7cf478bb58-xl4zr   1/1     Running   0          85s   172:244::1f75:c21e:b2f9:de81   bclinux-11   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   172:96::1      <none>        443/TCP        16h   <none>
service/nginx        NodePort    172:96::11a8   <none>        80:30587/TCP   12s   app=nginx
# curl the pod addresses, the service address, and the node addresses to confirm they are all reachable
[root@bclinux-11 ~]# curl -I6 [172:244::390e:eaa8:1a33:8703] 2>&1|grep HTTP
HTTP/1.1 200 OK
[root@bclinux-11 ~]# curl -I6 [172:244::be3f:5e5d:bf32:cf80] 2>&1|grep HTTP
HTTP/1.1 200 OK
[root@bclinux-11 ~]# curl -I6 [172:244::1f75:c21e:b2f9:de81] 2>&1|grep HTTP
HTTP/1.1 200 OK
[root@bclinux-11 ~]# curl -I6 [172:96::11a8] 2>&1|grep HTTP
HTTP/1.1 200 OK
[root@bclinux-11 ~]# curl -I6 [1:2:3:4::13]:30587 2>&1|grep HTTP
HTTP/1.1 200 OK
[root@bclinux-11 ~]# curl -I6 [1:2:3:4::12]:30587 2>&1|grep HTTP
HTTP/1.1 200 OK
[root@bclinux-11 ~]# curl -I6 [1:2:3:4::11]:30587 2>&1|grep HTTP
HTTP/1.1 200 OK
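Note the brackets around every IPv6 literal in the curl commands above: without them, the colons inside the address would be parsed as a port separator. A small sketch of building such URLs for the NodePort (addresses and port taken from this deployment):

```shell
# IPv6 literals must be wrapped in [] inside URLs (RFC 3986).
nodeport=30587
for ip in "1:2:3:4::11" "1:2:3:4::12" "1:2:3:4::13"; do
  echo "http://[${ip}]:${nodeport}/"
done
```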

From: https://www.cnblogs.com/wang-jc/p/17584565.html
