
K8S Cloud Native: High-Availability Cluster Deployment V1.28.2


1. Environment Preparation

K8s cluster role | IP | Hostname | Installed components
master 10.1.16.160 hqiotmaster07l apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico
master 10.1.16.161 hqiotmaster08l apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico
master 10.1.16.162 hqiotmaster09l apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico
worker 10.1.16.163 hqiotnode12l kubelet, kube-proxy, docker, calico, coredns, ingress-nginx
worker 10.1.16.164 hqiotnode13l kubelet, kube-proxy, docker, calico, coredns, ingress-nginx
worker 10.1.16.165 hqiotnode14l kubelet, kube-proxy, docker, calico, coredns, ingress-nginx
vip 10.1.16.202   nginx, keepalived

1.1 Server Environment Initialization

# Required on every control-plane and worker node
# 1. Set the hostname (use the matching name on each host)
hostnamectl set-hostname master && bash

# 2. Append the hosts entries
cat << EOF >> /etc/hosts
10.1.16.160 hqiotmaster07l
10.1.16.161 hqiotmaster08l
10.1.16.162 hqiotmaster09l
10.1.16.163 hqiotnode12l
10.1.16.164 hqiotnode13l
10.1.16.165 hqiotnode14l
EOF

# 3. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 4. Disable SELinux
setenforce 0                                                          # takes effect immediately
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config  # persists across reboots

# 5. Disable swap
swapoff -a    # temporary, takes effect immediately
# For a permanent change, edit /etc/fstab and comment out the swap line, e.g. "/mnt/swap swap swap defaults 0 0"
vi /etc/fstab
free -m       # the Swap row should now show 0 everywhere
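If you prefer a non-interactive version of the permanent change, here is a small sketch (it assumes the swap entries live in /etc/fstab, as above):

swapoff -a
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab   # comment out every swap mount
free -m   # the Swap row should read 0 everywhere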


# 6. Configure time synchronization on every machine
yum install chrony -y
systemctl start chronyd && systemctl enable chronyd
chronyc sources
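If the servers cannot reach the default public NTP pool, a sketch for pointing chrony at an internal time source instead (ntp.example.com is a placeholder for your own NTP server):

echo "server ntp.example.com iburst" >> /etc/chrony.conf
systemctl restart chronyd
chronyc tracking   # once synced, the reported system time offset should be close to zero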

 

# 7. Create the /etc/modules-load.d/containerd.conf configuration file:
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Load the modules so the configuration takes effect:
modprobe overlay
modprobe br_netfilter

 

# 8. Pass bridged IPv4 traffic to the iptables chains
cat << EOF > /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

 

# 9. Prerequisites for enabling IPVS on the nodes (do not enable IPVS mode if you plan to use Istio)

Make sure the ipset package is installed on every node. Installing the ipvsadm management tool is also recommended so the IPVS proxy rules can be inspected.

yum install -y ipset ipvsadm


Since IPVS is already part of the mainline kernel, enabling IPVS for kube-proxy only requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on every node:

cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the script executable and run it:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The /etc/sysconfig/modules/ipvs.modules file created above ensures the required modules are loaded automatically when the node reboots.

 

If modprobe fails with "modprobe: FATAL: Module nf_conntrack_ipv4 not found."

you are running a newer kernel (for example 5.x rather than the 3.x kernel most guides assume). In newer kernels nf_conntrack_ipv4 has been replaced by nf_conntrack, so the correct configuration is the following.

Run this script on every node:

cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Make the script executable and run it:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
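On systemd-based systems an equivalent approach is to let systemd-modules-load handle this, since it reads /etc/modules-load.d/*.conf at boot. A sketch using the newer nf_conntrack module name:

cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack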

 

# 10. Apply the sysctl settings
sysctl --system 
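A quick verification sketch: each of the values below should print 1 after sysctl --system has been run:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward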

2. Basic Software Packages

yum install -y gcc gcc-c++ make

yum install wget net-tools vim* nc telnet-server telnet curl openssl-devel libnl3-devel net-snmp-devel zlib zlib-devel pcre-devel openssl openssl-devel

# Increase the shell history size and the SSH idle timeout
vi /etc/profile
HISTSIZE=3000
TMOUT=3600
   
Save and exit, then run:
source /etc/profile

3. Docker Installation

Install the yum utility collection
yum install -y yum-utils

Add the Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Remove any previously installed Docker packages
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine

yum list installed | grep docker
yum remove -y docker-ce.x86_64

rm -rf /var/lib/docker
rm -rf /etc/docker/

List the installable versions
yum list docker-ce --showduplicates | sort -r
Install the latest version
yum -y install docker-ce

Or install a specific version of docker-ce:
yum -y install docker-ce-23.0.3-1.el7

Start Docker and enable it at boot
systemctl enable docker && systemctl start docker

Upload daemon.json to /etc/docker (a minimal sample is sketched after this block)
systemctl daemon-reload
systemctl restart docker.service
docker info
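The content of daemon.json is not shown above; a minimal sketch that is consistent with the rest of this guide (systemd cgroup driver to match the kubelet, plus the 10.1.1.167 Harbor registry that is also configured for containerd later; the log options are just example values):

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" },
  "insecure-registries": ["10.1.1.167"]
}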

Common docker commands:
systemctl stop docker
systemctl start docker
systemctl enable docker
systemctl status docker
systemctl restart docker
docker info
docker --version
containerd --version

4. containerd Installation

Download the containerd binary package:
If the server cannot reach the Internet, download it on a machine that can and then upload it to the server.

wget https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-cni-1.7.14-linux-amd64.tar.gz

The tarball is already laid out according to the officially recommended binary-deployment directory structure; it contains the systemd unit files, containerd itself, and the CNI deployment files.
Extract it into the root of the filesystem:
tar -zvxf cri-containerd-cni-1.7.14-linux-amd64.tar.gz -C /

Note: in testing, the runc bundled in cri-containerd-cni-1.7.14-linux-amd64.tar.gz has dynamic-linking problems on CentOS 7,
so download runc separately from its GitHub releases and replace the one installed with containerd above:
wget https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64

install -m 755 runc.amd64 /usr/sbin/runc
runc -v

runc version 1.1.10
commit: v1.1.10-0-g18a0cb0f
spec: 1.0.2-dev
go: go1.20.10
libseccomp: 2.5.4

Next, generate the containerd configuration file:
rm -rf /etc/containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

According to the Container runtimes documentation, on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps the node more stable under resource pressure, so configure containerd on every node to use the systemd cgroup driver.
Edit the previously generated configuration file /etc/containerd/config.toml:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

# Use the Aliyun mirror for the pause image; without this the default registry cannot be reached
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

# Configure a private Harbor registry
vi /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.configs]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."10.1.1.167".tls]
    insecure_skip_verify = true
  [plugins."io.containerd.grpc.v1.cri".registry.configs."10.1.1.167".auth]
    username = "admin"
    password = "Harbor12345"
    
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
    endpoint = ["https://registry.aliyuncs.com/google_containers"]

  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.1.1.167"]
    endpoint = ["https://10.1.1.167"]

# Enable containerd at boot and start it
systemctl daemon-reload
systemctl enable --now containerd && systemctl restart containerd

# Test with crictl; it should print the version information with no errors:
crictl version

Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.7.14
RuntimeApiVersion:  v1
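Optionally, point crictl at the containerd socket explicitly so it does not probe deprecated endpoints; a sketch assuming the default socket path that is also used later in kubeadm.yaml:

cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
crictl info > /dev/null && echo "crictl OK"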

5. Installing and Configuring Kubernetes

5.1 Kubernetes high-availability design

The high-availability design of this cluster is as follows.

In this design, keepalived + nginx provide high availability for the k8s apiserver.

Simply treating one master node as the primary and joining the other two masters to it does not give a highly available cluster: as soon as that primary master goes down, the whole cluster becomes unusable.

5.2 Making the k8s apiserver highly available with keepalived + nginx

Install and configure Nginx on all three master nodes

yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel

tar -zvxf nginx-1.27.0.tar.gz

cd nginx-1.27.0

Build and install
./configure --prefix=/usr/local/nginx --with-stream --with-http_stub_status_module --with-http_ssl_module

make && make install

ln -s /usr/local/nginx/sbin/nginx /usr/sbin/

nginx -v

cd /usr/local/nginx/sbin/
# start the service
./nginx
# stop the service
./nginx -s stop
# check port 80
netstat -ntulp | grep 80

Create a systemd unit so nginx can be managed as a service
vi /usr/lib/systemd/system/nginx.service

[Unit]
Description=nginx - high performance web server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop

[Install]
WantedBy=multi-user.target

Upload nginx.service to /usr/lib/systemd/system, then:
systemctl daemon-reload
systemctl start nginx.service && systemctl enable nginx.service
systemctl status nginx.service

Edit the nginx configuration file (/usr/local/nginx/conf/nginx.conf)

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
error_log  /var/log/nginx/error.log;
#error_log  logs/error.log  info;

pid        /var/log/nginx/nginx.pid;


events {
    worker_connections  1024;
}

stream { 
 
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent'; 
 
    access_log /var/log/nginx/k8s-access.log main; 
 
    upstream k8s-apiserver { 
            server 10.1.16.160:6443 weight=5 max_fails=3 fail_timeout=30s;
            server 10.1.16.161:6443 weight=5 max_fails=3 fail_timeout=30s;
            server 10.1.16.162:6443 weight=5 max_fails=3 fail_timeout=30s;
 
    }
    server { 
        listen 16443; # nginx runs on the same hosts as the apiserver, so this port must not be 6443 or it would conflict
        proxy_pass k8s-apiserver; 
    }

}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    #gzip  on;

    server {
        listen       8080;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
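Before restarting, the edited configuration can be checked for syntax errors; a small sketch (the -c path assumes the source-build prefix used above):

nginx -t -c /usr/local/nginx/conf/nginx.conf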

Restart nginx.service
systemctl restart nginx.service

Install and configure Keepalived on all three master nodes

yum install -y curl gcc openssl-devel libnl3-devel net-snmp-devel

yum install -y keepalived

cd /etc/keepalived/

mv keepalived.conf keepalived.conf.bak

vi /etc/keepalived/keepalived.conf

# Configuration for master node 1
! Configuration File for keepalived

global_defs {
   router_id NGINX_MASTER
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192         # NIC name
    mcast_src_ip 10.1.16.160 # this server's IP
    virtual_router_id 51     # VRRP virtual router ID; unique per VRRP instance
    priority 100             # priority
    nopreempt
    advert_int 2             # VRRP advertisement interval; default is 1s
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.1.16.202/24   # virtual IP (VIP)
    }
    track_script {
        chk_apiserver   # health-check script
    }
}

# Configuration for master node 2
! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
   script_user root
   enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 10.1.16.161
    virtual_router_id 51
    priority 99
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.1.16.202/24
    }
    track_script {
        chk_apiserver
    }
}

# Configuration for master node 3
! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
   script_user root
   enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 10.1.16.162
    virtual_router_id 51
    priority 98
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.1.16.202/24
    }
    track_script {
        chk_apiserver
    }
}

#  Health-check script
vi  /etc/keepalived/check_apiserver.sh

#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep nginx)   # nginx is the local load balancer being monitored in this setup
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Set permissions:
chmod +x /etc/keepalived/check_apiserver.sh
chmod 644 /etc/keepalived/keepalived.conf

Start keepalived:
systemctl daemon-reload
systemctl start keepalived && systemctl enable keepalived
systemctl restart keepalived
systemctl status keepalived


# Check the VIP (run on the master that currently holds it)
[root@master nginx]# ip addr
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9d:e5:7a brd ff:ff:ff:ff:ff:ff
    altname enp11s0
    inet 10.1.16.160/24 brd 10.1.16.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 10.1.16.202/24 scope global secondary ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9d:e57a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Test: stop nginx on master and the VIP 10.1.16.202 fails over to the master2 server; after nginx and keepalived are started again on master, the VIP moves back to master.
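A sketch of the same failover check as commands (ens192 and the VIP 10.1.16.202 as configured above):

systemctl stop nginx.service                                    # run on the master that currently holds the VIP
ip addr show ens192 | grep 10.1.16.202                          # run on the other masters: the VIP should appear on one of them
systemctl start nginx.service && systemctl restart keepalived   # the VIP moves back afterwards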

5.3 Deploying Kubernetes with kubeadm

# Install kubeadm and kubelet on every node; first create the yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast

# If these components were installed previously, it is best to remove them completely first

# Reset kubernetes and its networking; delete the network configuration and links
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig docker0 down
ip link delete cni0
systemctl start docker
systemctl start kubelet

# Remove kubernetes-related packages and files
yum -y remove kubelet kubeadm kubectl
rm -rvf $HOME/.kube
rm -rvf ~/.kube/
rm -rvf /etc/kubernetes/
rm -rvf /etc/systemd/system/kubelet.service.d
rm -rvf /etc/systemd/system/kubelet.service
rm -rvf /usr/bin/kube*
rm -rvf /etc/cni
rm -rvf /opt/cni
rm -rvf /var/lib/etcd
rm -rvf /var/etcd

# List the available kubelet, kubeadm and kubectl versions
yum list kubelet kubeadm kubectl  --showduplicates | sort -r

# Install the k8s packages; required on both masters and workers
yum install -y  kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2

systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet

Common kubernetes commands:
systemctl enable kubelet
systemctl restart kubelet
systemctl stop kubelet
systemctl start kubelet
systemctl status kubelet
kubelet --version

Note: what each package does
kubeadm: the tool used to bootstrap (initialize) the k8s cluster
kubelet: installed on every node in the cluster; it is what starts Pods. With a kubeadm install, both the control-plane and worker components run as Pods, so any node that runs Pods needs the kubelet
kubectl: used to deploy and manage applications, inspect all kinds of resources, and create, delete and update components

5.4 kubeadm initialization

Running kubeadm config print init-defaults --component-configs KubeletConfiguration prints the default configuration used for cluster initialization.

The defaults show that imageRepository can be customized to control where the images required by k8s are pulled from during initialization.
Based on the defaults, create the kubeadm.yaml configuration file used for this kubeadm installation.

# Create kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
# localAPIEndpoint:
#   advertiseAddress: 10.1.16.160
#   bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
controlPlaneEndpoint: 10.1.16.202:16443  # the control plane endpoint is the virtual IP
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 20.244.0.0/16  # Pod network CIDR
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Here imageRepository is set to the Aliyun registry so the images can be pulled even though gcr is blocked. criSocket selects containerd as the container runtime, the kubelet cgroupDriver is set to systemd, and the kube-proxy mode is set to ipvs.
Before initializing the cluster, you can pre-pull all of the required images on every node with kubeadm config images pull --config kubeadm.yaml.

# Pull the container images required by k8s
kubeadm config images pull --config kubeadm.yaml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

# If the images cannot be downloaded directly, export them on a connected machine and import them offline
ctr -n k8s.io image export kube-proxy.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
ctr -n k8s.io image import kube-proxy.tar
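A quick check sketch that the offline import landed in the image namespace the CRI actually uses:

ctr -n k8s.io images ls | grep kube-proxy
crictl images | grep kube-proxy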

# Initialize the cluster with kubeadm
kubeadm init --config kubeadm.yaml

# Initialization output
[root@HQIOTMASTER10L yum.repos.d]# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hqiotmaster10l kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.16.169 10.1.16.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hqiotmaster10l localhost] and IPs [10.1.16.160 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hqiotmaster10l localhost] and IPs [10.1.16.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0715 16:18:15.468503   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0715 16:18:15.544132   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0715 16:18:15.617290   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0715 16:18:15.825899   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.523308 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node hqiotmaster10l as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node hqiotmaster10l as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
W0715 16:18:51.448813   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.1.16.202:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:0cc00fbdbfaa12d6d784b2f20c36619c6121a1dbd715f380fae53f8406ab6e4c \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.16.202:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:0cc00fbdbfaa12d6d784b2f20c36619c6121a1dbd715f380fae53f8406ab6e4c
        
        
The output above records the complete initialization log; from it you can see the key steps required to install a Kubernetes cluster.
# The key phases are:
	• [certs] generates the various certificates
	• [kubeconfig] generates the kubeconfig files
	• [kubelet-start] writes the kubelet configuration file "/var/lib/kubelet/config.yaml"
	• [control-plane] creates the apiserver, controller-manager and scheduler static Pods from the yaml files in /etc/kubernetes/manifests
	• [bootstrap-token] generates the token; note it down, it will be used later with kubeadm join to add nodes to the cluster
	• [addons] installs the essential add-ons: CoreDNS and kube-proxy


# Configure kubectl access to the cluster:
rm -rvf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
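An optional convenience sketch for the masters: enable kubectl bash completion (assumes the bash-completion package is available in your repositories):

yum install -y bash-completion
kubectl completion bash > /etc/bash_completion.d/kubectl
source /etc/bash_completion.d/kubectl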


# Check the cluster status and confirm that every component is Healthy
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                           STATUS             MESSAGE                         ERROR
scheduler                      Healthy            ok                              
controller-manager             Healthy            ok                              
etcd-0                         Healthy            {"health":"true","reason":""}   

# Verify kubectl
[root@k8s-master-0 ~]# kubectl get nodes
NAME             STATUS     ROLES           AGE     VERSION
hqiotmaster07l   NotReady   control-plane   2m12s   v1.28.2

5.5 Scaling out the k8s cluster: adding masters

# 1. Pull the images on the other master nodes
# Copy kubeadm.yaml to master2 and master3 and pre-pull the required images
kubeadm config images pull --config=kubeadm.yaml

# 2. Copy the certificates from the first master to the other master nodes
mkdir -p /etc/kubernetes/pki/etcd/

scp /etc/kubernetes/pki/ca.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.* master3:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* master3:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* master3:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/etcd/ca.* master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.* master3:/etc/kubernetes/pki/etcd/

# 3. Generate a join token on the first master
[root@master etcd]# kubeadm token create --print-join-command
kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918

# 4. Join master2 and master3 to the cluster as control-plane nodes
kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918 --control-plane

Expected result: Run 'kubectl get nodes' to see this node join the cluster.

# 5. Configure kubectl access on master2 and master3
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config


# 6. Check the nodes
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE   VERSION
master    NotReady   control-plane   97m   v1.28.2
master2   NotReady   control-plane   85m   v1.28.2
master3   NotReady   control-plane   84m   v1.28.2

5.6 Adding worker nodes to the cluster

# 1. Join node1 to the cluster as a worker node

[root@node1 containerd]# kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918

Expected result: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check from any master node
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE    VERSION
master    NotReady   control-plane   109m   v1.28.2
master2   NotReady   control-plane   97m    v1.28.2
master3   NotReady   control-plane   96m    v1.28.2
node1     NotReady   <none>          67s    v1.28.2

# 2. Set the ROLES label on the worker node
[root@master k8s]# kubectl label node node1 node-role.kubernetes.io/worker=worker
node/node1 labeled
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
master    NotReady   control-plane   110m    v1.28.2
master2   NotReady   control-plane   98m     v1.28.2
master3   NotReady   control-plane   97m     v1.28.2
node1     NotReady   worker          2m48s   v1.28.2
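All nodes report NotReady because no CNI network plugin is installed yet; the component table at the top of this guide uses calico for that. A hedged sketch of installing it (the Calico version is an assumption, pick one that supports Kubernetes 1.28, and the pool CIDR must match the podSubnet 20.244.0.0/16 set in kubeadm.yaml):

wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
# edit calico.yaml: uncomment CALICO_IPV4POOL_CIDR and set it to "20.244.0.0/16"
kubectl apply -f calico.yaml
kubectl get nodes   # nodes turn Ready once the calico-node pods are Running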

6. Configuring etcd for High Availability

# Edit the etcd.yaml manifest on master, master2 and master3
vi /etc/kubernetes/manifests/etcd.yaml

Change
- --initial-cluster=hqiotmaster10l=https://10.1.16.160:2380
to
- --initial-cluster=hqiotmaster10l=https://10.1.16.160:2380,hqiotmaster11l=https://10.1.16.161:2380,hqiotmaster12l=https://10.1.16.162:2380

6.1 Verify that the etcd cluster is configured correctly

# etcdctl download: https://github.com/etcd-io/etcd/releases
cd etcd-v3.5.9-linux-amd64
cp etcd* /usr/local/bin

[root@HQIOTMASTER07L ~]# etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list
42cd16c4205e7bee, started, hqiotmaster07l, https://10.1.16.160:2380, https://10.1.16.160:2379, false
bb9be9499c3a8464, started, hqiotmaster09l, https://10.1.16.162:2380, https://10.1.16.162:2379, false
c8761c7050ca479a, started, hqiotmaster08l, https://10.1.16.161:2380, https://10.1.16.161:2379, false

[root@HQIOTMASTER07L ~]# etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://10.1.16.160:2379,https://10.1.16.161:2379,https://10.1.16.162:2379 endpoint status --cluster
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.1.16.160:2379 | 42cd16c4205e7bee |   3.5.9 |   15 MB |     false |      false |        11 |    2905632 |            2905632 |        |
| https://10.1.16.162:2379 | bb9be9499c3a8464 |   3.5.9 |   15 MB |     false |      false |        11 |    2905632 |            2905632 |        |
| https://10.1.16.161:2379 | c8761c7050ca479a |   3.5.9 |   16 MB |      true |      false |        11 |    2905632 |            2905632 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

 

From: https://www.cnblogs.com/tianxiang2046/p/18344521
