1. K8s High-Availability Architecture
etcd is the key-value database that stores all cluster state.
The apiserver is the cluster's control hub; all traffic passes through it.
The ControllerManager runs the controllers that watch and reconcile cluster state.
The Scheduler assigns Pods to nodes.
If the masters should also run Pods, install the kubelet and kube-proxy components on them as well.
The components marked in the diagram reach the apiserver through the VIP provided by the LB; only the apiserver talks to the etcd database.
The LB performs leader election and attaches the VIP to the NIC of one of the master nodes.
Interview point:
The number of masters can be even, but etcd members must be odd, because etcd uses the Raft algorithm and needs a majority (quorum) of members to elect a leader.
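A quick way to see why an even etcd member count buys nothing: a Raft cluster of N members needs a quorum of N/2+1 (integer division) and therefore tolerates (N-1)/2 failures. A rough sketch:
#4 members tolerate no more failures than 3 do; only 5 tolerates one more
for n in 1 2 3 4 5; do
echo "members=$n quorum=$((n / 2 + 1)) tolerated_failures=$(((n - 1) / 2))"
done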
1.1 Installation Notes
Binary installation is more stable than kubeadm.
With binary installation the certificate validity can be set to, say, 100 years; kubeadm certificates expire after one year.
A binary cluster recovers faster after a restart than a kubeadm one.
Upgrades also differ: with binaries you just swap the executables, so maintenance is simpler than with kubeadm.
Note: if you power off the host without shutting down the VMs first, a kubeadm cluster may fail to come back up.
Base environment setup and the kernel upgrade are the same as for kubeadm and are not repeated here; see: kernel upgrade.
1.2 Download the Binaries
The binaries run directly, so installation is just downloading the files into the right directories.
First decide which version to install; alpha releases are pre-release builds and not recommended.
The latest release at the time of writing is v1.25; this guide installs v1.23.13. Official docs: https://kubernetes.io/zh-cn/
Download the kubernetes packages on Master01, choosing v1.23.
Download link: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
Pick the amd64 build.
The changelog also lists the versions of other components; you can see this k8s release supports etcd 3.5.1.
Package download: Baidu Netdisk
Run the following on master01.
#Download the package
wget https://dl.k8s.io/v1.23.13/kubernetes-server-linux-amd64.tar.gz
#Extract the kubernetes binaries
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
#Extract the etcd binaries
tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.1-linux-amd64/etcd{,ctl}
#Check the versions
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.13
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.1
API version: 3.5
#Send the binaries to the other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
#Create the /opt/cni/bin directory on all nodes
mkdir -p /opt/cni/bin
#On Master01, switch to the 1.23.x branch
cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x
2. Generate Certificates
The generation chain is: certificate signing request (CSR) file → CA root certificate → the other certificates.
Certificates are generated on master01 and then copied to the other nodes.
Download the certificate tools on Master01
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
#If you uploaded the binaries to the server manually instead, run:
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
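A quick check that the tools are installed correctly (cfssl provides a version subcommand):
cfssl version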
2.1 etcd Certificates
#Create the etcd certificate directory on all master nodes
mkdir /etc/etcd/ssl -p
#Create the kubernetes directories on all nodes; keeping the same certificate directory as kubeadm makes files easy to find
mkdir -p /etc/kubernetes/pki
Generate the etcd certificates on the Master01 node
The certificates are generated from CSR files, certificate signing requests that configure domains, the company, the unit and so on; a sketch follows.
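The CSR files in the repo follow the standard cfssl layout, roughly like this (the field values here are illustrative, not necessarily the repo's exact contents):
{
  "CN": "etcd",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" }
  ]
}
The certificate validity (the 100-year expiry mentioned in 1.1) is set in ca-config.json, via an expiry value such as 876000h.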
cd /root/k8s-ha-install/pki
#Generate the etcd CA certificate and its key; etcd-ca-csr.json sets the certificate expiry
#The chain: the CSR file produces the root CA, and the root CA then signs the other certificates
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
#The certificate is bound to IPs and hostnames; here we bind the etcd hostnames and IPs. In this lab etcd runs on the master nodes, so the master hostnames and addresses are used; you can also list a few spare addresses for future use
cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.103.236.201,10.103.236.202,10.103.236.203 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
#Six certificate files are generated in this directory
ls /etc/etcd/ssl/
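With cfssljson -bare's default naming, the six files should be:
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem  etcd.csr  etcd-key.pem  etcd.pem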
#Copy the certificates to the other master nodes
MasterNodes='k8s-master02 k8s-master03'
for NODE in $MasterNodes; do
ssh $NODE "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
done
done
2.2 apiserver Certificate
Generate the kubernetes certificates on Master01
[root@k8s-master01 pki]# cd /root/k8s-ha-install/pki
#Generate the root CA; three files are produced
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
ls /etc/kubernetes/pki/
# 192.168.0.1 is the first IP of the k8s service CIDR; mind the subnet mask
# If this is not an HA cluster, use Master01's IP instead of 10.103.236.236
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=192.168.0.1,10.103.236.236,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,10.103.236.201,10.103.236.202,10.103.236.203 \
-profile=kubernetes \
apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
#Three files starting with apiserver are now present
ls /etc/kubernetes/pki/
2.3 Generate the Aggregation Certificates
The apiserver certificate carries very broad permissions; the aggregation (front-proxy) certificate carries much narrower ones and is used to authorize third-party components such as the metrics-server, avoiding security problems.
cfssl gencert \
-initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
#A WARNING in the output can be ignored
2.4 controller-manager Certificate
Generate the controller-manager certificate; the controller-manager needs to talk to the apiserver, hence the certificate.
The steps below write the certificate material and the apiserver address into controller-manager.kubeconfig; controller-manager presents this file when talking to the apiserver.
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# Note: if this is not an HA cluster, change 10.103.236.236:8443 to master01's address and 8443 to the apiserver port, 6443 by default
# set-cluster: define a cluster entry named kubernetes; k8s can manage multiple clusters, distinguished by name
#--embed-certs=true embeds the content of ca.pem into controller-manager.kubeconfig; if false, only the path to ca.pem is written. Embedding is recommended
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.103.236.236:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
#The step above wrote the cluster entry into controller-manager.kubeconfig
# Set the context, tying the kubernetes cluster to the user system:kube-controller-manager
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
#Set the user entry, embedding the client certificate into controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Set the default context; a kubeconfig file can hold multiple clusters, and this makes kubernetes the default
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
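An optional sanity check: dump the assembled kubeconfig and confirm the cluster, user and context entries are all present (certificate data is shown redacted):
kubectl config view --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig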
2.5 scheduler Certificate
Generate the scheduler certificate, used to communicate with the apiserver
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
# Note: if this is not an HA cluster, change 10.103.236.236:8443 to master01's address and 8443 to the apiserver port, 6443 by default
#Set the cluster entry
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.103.236.236:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
#Set the user entry
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
#Set the context
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
# Set the default context
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
2.6 Admin Certificate
Generate the admin certificate and admin.kubeconfig, the equivalent of kubeadm's admin.conf, used by kubectl to access the cluster
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# Note: if this is not an HA cluster, change 10.103.236.236:8443 to master01's address and 8443 to the apiserver port, 6443 by default
#Set the cluster entry
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.103.236.236:8443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
#Set the user entry
kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
#Set the context
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
#Set the default context
kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
ls /etc/kubernetes/admin.kubeconfig
#Even without configuring environment variables, pointing kubectl at this kubeconfig lets you run commands against the cluster
kubectl --kubeconfig /etc/kubernetes/admin.kubeconfig get node
Generate the key pair used to sign ServiceAccount tokens.
#Generate the private key
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
#Derive the public key
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
Creating a ServiceAccount generates a corresponding secret containing a token; controller-manager produces that token from this key pair.
Send the certificates to the other master nodes
for NODE in k8s-master02 k8s-master03; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
Check the certificate files
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr apiserver.csr ca.csr controller-manager.csr front-proxy-ca.csr front-proxy-client.csr sa.key scheduler-key.pem
admin-key.pem apiserver-key.pem ca-key.pem controller-manager-key.pem front-proxy-ca-key.pem front-proxy-client-key.pem sa.pub scheduler.pem
admin.pem apiserver.pem ca.pem controller-manager.pem front-proxy-ca.pem front-proxy-client.pem scheduler.csr
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ |wc -l
23   #23 files in total
3. High-Availability Configuration
High-availability configuration (note: if this is not an HA cluster, haproxy and keepalived are unnecessary)
If installing in the cloud, skip this chapter as well and use the cloud load balancer directly, e.g. Alibaba Cloud SLB or Tencent Cloud ELB.
On a public cloud, use its built-in load balancer, such as Alibaba Cloud SLB or Tencent Cloud ELB, in place of haproxy and keepalived, since most public clouds do not support keepalived. On Alibaba Cloud there is an extra caveat: the kubectl client cannot sit on a master node, because SLB has a loopback problem (servers behind the SLB cannot reach the SLB itself). Tencent Cloud has fixed this, so it is the recommended choice.
SLB -> haproxy -> apiserver
3.1 Configure HAProxy
Install keepalived and haproxy on all master nodes
yum install keepalived haproxy -y
Configure HAProxy on all masters (the config is identical everywhere). To distinguish it from kubeadm, the port here is 8443; kubeadm uses 16443
vim /etc/haproxy/haproxy.cfg
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend k8s-master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 10.103.236.201:6443 check
server k8s-master02 10.103.236.202:6443 check
server k8s-master03 10.103.236.203:6443 check
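An optional step before starting anything: haproxy can validate a config file with its -c flag.
#Check the config for syntax errors
haproxy -c -f /etc/haproxy/haproxy.cfg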
3.2 Configure Keepalived
Configure keepalived on all master nodes. Unlike haproxy, the config differs per node; pay attention to each node's IP and NIC (the interface parameter).
Master01 keepalived config
[root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens33
mcast_src_ip 10.103.236.201
virtual_router_id 51
priority 101
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.103.236.236
}
track_script {
chk_apiserver
} }
Master02 keepalived config
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 10.103.236.202
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.103.236.236
}
track_script {
chk_apiserver
} }
Master03 keepalived config
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 10.103.236.203
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
10.103.236.236
}
track_script {
chk_apiserver
} }
3.3 Health-Check Script
On all master nodes
[root@k8s-master01 keepalived]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
Make it executable
chmod +x /etc/keepalived/check_apiserver.sh
Start haproxy and keepalived on all master nodes
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
Verify
#After starting, check that port 8443 is listening
netstat -lntp | grep 8443
#Watch the logs
tail -f /var/log/messages
#Check which master node 10.103.236.236 is currently bound to (check with ip a on each node)
ping 10.103.236.236
Important: if you installed keepalived and haproxy, test that keepalived actually works
telnet 10.103.236.236 8443
If the VIP does not answer ping, or telnet does not print the "Escape character is '^]'." prompt, the VIP is not usable. Do not continue; troubleshoot keepalived first, e.g. firewall and selinux, the haproxy and keepalived status, and the listening ports
On all nodes the firewall must be disabled and inactive: systemctl status firewalld
On all nodes selinux must be disabled: getenforce
On the master nodes, check haproxy and keepalived status: systemctl status keepalived haproxy
On the master nodes, check the listening ports: netstat -lntp
4. System Component Configuration
4.1 etcd
The etcd configs are largely identical; be sure to adjust the hostname and IP addresses in each master node's etcd config.
Even with a single master, run at least three etcd members: etcd is the cluster's database, and if it goes down the data is lost.
master01 config
vim /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.103.236.201:2380'
listen-client-urls: 'https://10.103.236.201:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.103.236.201:2380'
advertise-client-urls: 'https://10.103.236.201:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.103.236.201:2380,k8s-master02=https://10.103.236.202:2380,k8s-master03=https://10.103.236.203:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
Mind the first line, name: 'k8s-master01', and the IP addresses below it; don't mix them up.
master02 config
vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.103.236.202:2380'
listen-client-urls: 'https://10.103.236.202:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.103.236.202:2380'
advertise-client-urls: 'https://10.103.236.202:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.103.236.201:2380,k8s-master02=https://10.103.236.202:2380,k8s-master03=https://10.103.236.203:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
master03 config
vim /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.103.236.203:2380'
listen-client-urls: 'https://10.103.236.203:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.103.236.203:2380'
advertise-client-urls: 'https://10.103.236.203:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.103.236.201:2380,k8s-master02=https://10.103.236.202:2380,k8s-master03=https://10.103.236.203:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
Create the Service unit
The binaries run as Linux daemons, so systemd service files are needed.
Create the etcd service on all master nodes and start it
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
The config above references certificates under /etc/kubernetes/pki/etcd/, while we generated them in /etc/etcd/ssl/, so create a symlink.
Create the etcd certificate directory on all master nodes
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
#Note: start etcd on all three nodes at roughly the same time, or startup may fail
systemctl enable --now etcd
#Check for errors
tail -f /var/log/messages   #only info-level logs means all is well
Check etcd status
$ export ETCDCTL_API=3
etcdctl --endpoints="10.103.236.203:2379,10.103.236.202:2379,10.103.236.201:2379" \
--cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
--cert=/etc/kubernetes/pki/etcd/etcd.pem \
--key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
Note the IS LEADER column: two false and one true means one leader and two followers. Writes go through the leader; if the leader dies, a new one is elected automatically.
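A further optional check: etcdctl can also report per-member health using the same certificate flags:
etcdctl --endpoints="10.103.236.201:2379,10.103.236.202:2379,10.103.236.203:2379" \
--cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
--cert=/etc/kubernetes/pki/etcd/etcd.pem \
--key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health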
4.2 apiserver
Create the required directories on all master nodes
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
Note: if this is not an HA cluster, change 10.103.236.236 to master01's address. This document uses 192.168.0.0/16 as the k8s service CIDR; it must not overlap the host network or the Pod CIDR, so adjust as needed
Master01 config
[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=10.103.236.201 \
--service-cluster-ip-range=192.168.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://10.103.236.201:2379,https://10.103.236.202:2379,https://10.103.236.203:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
master02 config
Note: this document uses 192.168.0.0/16 as the k8s service CIDR; it must not overlap the host network or the Pod CIDR, so adjust as needed
[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=10.103.236.202 \
--service-cluster-ip-range=192.168.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://10.103.236.201:2379,https://10.103.236.202:2379,https://10.103.236.203:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
master03 config
Note: this document uses 192.168.0.0/16 as the k8s service CIDR; it must not overlap the host network or the Pod CIDR, so adjust as needed
[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=10.103.236.203 \
--service-cluster-ip-range=192.168.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://10.103.236.201:2379,https://10.103.236.202:2379,https://10.103.236.203:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
Start the apiserver
#Enable kube-apiserver on all master nodes
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver
# If the log is not being flooded with errors, the install is fine
tail -f /var/log/messages
The service reporting active (running) means startup succeeded
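As an additional check, the apiserver's /healthz endpoint is readable without credentials under the default RBAC bindings (system:public-info-viewer):
curl -k https://127.0.0.1:6443/healthz
#Expected output: ok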
4.3 ControllerManager
Configure the kube-controller-manager service on all master nodes (the config is identical everywhere)
Note: this document uses 172.16.0.0/12 as the k8s Pod CIDR; it must not overlap the host network or the k8s Service CIDR, so adjust as needed
[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=172.16.0.0/12 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Start kube-controller-manager on all master nodes
systemctl daemon-reload
systemctl enable --now kube-controller-manager
Log lines starting with I are info and harmless; lines starting with E or F deserve attention
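One way to surface only the worrying lines (klog prefixes error and fatal entries with E or F followed by the date):
journalctl -u kube-controller-manager --no-pager | grep -E ' [EF][0-9]{4} '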
4.4 Scheduler
Configure the kube-scheduler service on all master nodes (the config is identical everywhere)
[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
scheduler.kubeconfig is the file used to talk to the cluster; the generated certificates are embedded in it
Start it
systemctl daemon-reload
systemctl enable --now kube-scheduler
Check the logs
tail -f /var/log/messages
5. TLS Bootstrapping
controller-manager and scheduler talk to the apiserver through their xxx.kubeconfig files, and those files are identical on every node.
kubelet also has to talk to the apiserver, but kubelet is bound to its host (hostname and IP address), so every node's credentials differ, and there may be hundreds of nodes; therefore bootstrapping is used to request certificates automatically.
A kubelet started with bootstrap-kubelet.kubeconfig specified requests its certificate automatically.
Note: if this is not an HA cluster, change 10.103.236.236:8443 to master01's address and 8443 to the apiserver port, 6443 by default
The bootstrap only needs to be created on Master01
cd /root/k8s-ha-install/bootstrap
#Set the cluster entry
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://10.103.236.236:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#Set the user entry (token-based)
kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#Set the context
kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
#Set the default context
kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Note: if you change the token in the context above, also change name, token-id and token-secret in bootstrap.secret.yaml
admin.kubeconfig plays the same role as kubeadm's admin.conf; to access the cluster with kubectl, just copy it into place
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
Only continue once the cluster status can be queried normally; otherwise stop and check the k8s components for faults
# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
etcd-2 Healthy {"health":"true","reason":""}
etcd-1 Healthy {"health":"true","reason":""}
Create the bootstrap token
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
6. Node Configuration
6.1 kubelet
Copy the certificates from the Master01 node to the nodes
cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
ssh $NODE mkdir -p /etc/kubernetes/pki
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
Create the required directories on all nodes; the masters run kubelet too
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
Configure the kubelet service on all nodes
[root@k8s-master01 bootstrap]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
If the runtime is Containerd, use the following kubelet configuration:
The config below points kubelet at a separate config file, /etc/kubernetes/kubelet-conf.yml
Configure the kubelet service drop-in on all nodes (this could also be written directly into kubelet.service):
# Runtime: Containerd
# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
If the runtime is Docker, use the following kubelet configuration:
# Runtime: Docker
# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
Create the kubelet config file; any future kubelet changes go in this file
Note: if you changed the k8s service CIDR, update clusterDNS in kubelet-conf.yml to the tenth address of the service CIDR, e.g. 192.168.0.10
[root@k8s-master01 bootstrap]# vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 192.168.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
Start kubelet on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
At this point it is normal for the system log /var/log/messages to show only messages like the following; as long as nothing loops endlessly, kubelet is fine. This particular error only means calico is not installed yet and disappears once it is:
Unable to update cni config: no networks found in /etc/cni/net.d
If there are many errors, or a lot of output you cannot explain, the kubelet configuration is wrong and needs rechecking
Check the cluster status (Ready or NotReady are both normal at this stage)
[root@k8s-master01 bootstrap]# kubectl get node
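Optionally, confirm that TLS bootstrapping auto-approved the kubelets' certificate requests; each node should have a CSR in the Approved,Issued condition:
kubectl get csr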
6.2 kube-proxy
Note: if this is not an HA cluster, change 10.103.236.236:8443 to master01's address and 8443 to the apiserver port, 6443 by default
Run the following only on Master01
cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
--output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.103.236.236:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Send the kubeconfig to the other nodes
for NODE in k8s-master02 k8s-master03; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
for NODE in k8s-node01 k8s-node02; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
Add the kube-proxy config and service file on all nodes:
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
If you changed the cluster Pod CIDR, update clusterCIDR in kube-proxy.yaml to your own Pod CIDR; note also that the mode is ipvs
vim /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
Start kube-proxy on all nodes
systemctl daemon-reload
systemctl enable --now kube-proxy
Check the logs; steady info-level output with no repeating errors means kube-proxy is running normally
Check the services (kubectl get svc): Pods reach the apiserver through the kubernetes service's ClusterIP
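With mode "ipvs", the proxy rules can also be inspected directly (assuming the ipvsadm tool is installed):
ipvsadm -Ln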
7. Install Calico
Installing the officially recommended version is advised
The following steps run only on master01
cd /root/k8s-ha-install/calico/
Change calico's network: replace the POD_CIDR placeholder with your own Pod CIDR
cp calico.yaml calico.yaml.bak
sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
#Verify
grep "IPV4POOL_CIDR" calico.yaml -A 1
kubectl apply -f calico.yaml
#Check pod status
kubectl get po -n kube-system
8. Install CoreDNS
Install the officially recommended version (recommended)
cd /root/k8s-ha-install/
If you changed the k8s service CIDR, set coredns' service IP to the tenth IP of the service CIDR,
the same value as in /etc/kubernetes/kubelet-conf.yml
COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" CoreDNS/coredns.yaml
Install coredns
kubectl create -f CoreDNS/coredns.yaml
Install the latest CoreDNS (not recommended)
The version recommended for your k8s release is enough; there is no need to chase the latest
COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
# ./deploy.sh -s -i ${COREDNS_SERVICE_IP} | kubectl apply -f -
Check status
# kubectl get po -n kube-system -l k8s-app=kube-dns
9. Install Metrics Server
In newer Kubernetes versions, system resource collection goes through metrics-server, which collects node and Pod memory, disk, CPU and network usage.
The k8s-ha-install directory contains both metrics-server and kubeadm-metrics-server; to compare the two:
vimdiff kubeadm-metrics-server/comp.yaml metrics-server/comp.yaml
Only the certificate locations differ: kubeadm's auto-generated certificates end in .crt, while the cfssl-generated certificates of this binary install end in .pem and use the aggregation certificates generated earlier.
Install metrics server
cd /root/k8s-ha-install/metrics-server
kubectl create -f .
Wait for metrics server to start, then check its status
# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01 231m 5% 1620Mi 42%
k8s-master02 274m 6% 1203Mi 31%
k8s-master03 202m 5% 1251Mi 32%
k8s-node01 69m 1% 667Mi 17%
k8s-node02 73m 1% 650Mi 16%
10. Install Dashboard
The Dashboard displays the various resources in the cluster; it can also tail Pod logs and run commands inside containers in real time.
Install the pinned dashboard version (recommended)
cd /root/k8s-ha-install/dashboard/
kubectl create -f .
Check status
[root@k8s-master01 dashboard]# kubectl get po -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-7fcdff5f4c-kbvx8 1/1 Running 0 59s
kubernetes-dashboard-85f59f8ff7-25zrp 1/1 Running 0 59s
Check the exposed port
[root@k8s-master01 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 192.168.208.237 <none> 8000/TCP 98s
kubernetes-dashboard NodePort 192.168.166.30 <none> 443:30812/TCP 98s
Install the latest version (worth exploring)
Official GitHub: https://github.com/kubernetes/dashboard
The latest dashboard release is listed on the official page
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
#Change the URL to the latest one shown on that page
Create an admin user
vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
#Apply the yaml
kubectl apply -f admin.yaml -n kube-system
For logging in to the dashboard, see the kubeadm notes; Chrome needs a workaround for the self-signed certificate.
Change the dashboard svc to NodePort so the dashboard is reachable via a host IP plus the port
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
29 selector:
30 k8s-app: kubernetes-dashboard
31 sessionAffinity: None
32 type: NodePort #change this, at line 32, to NodePort
Check the port
[root@k8s-master01 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 192.168.208.237 <none> 8000/TCP 98s
kubernetes-dashboard NodePort 192.168.166.30 <none> 443:30812/TCP 98s
Access the Dashboard: https://10.103.236.201:30812 (any node IP plus the NodePort)
For obtaining the login token, see the kubeadm notes.
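For reference, one common way to fetch the admin-user token on a 1.23 cluster (this assumes the admin-user ServiceAccount above; from 1.24 on the token secret is no longer auto-created):
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')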
11. Cluster Verification
Note: the binary install does not taint the masters, so they can run Pods.
A working cluster satisfies all of the following:
- Pods can resolve Services
- Pods can resolve Services in other namespaces
- Every node can reach the kubernetes svc on port 443 and the kube-dns service on port 53
- Pod-to-Pod communication works
  - within the same namespace
  - across namespaces
  - across machines
Each point is verified step by step below
#Deploy a busybox pod
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
kubectl get pod
Check that the svc resolves
[root@k8s-master01 dashboard]# kubectl exec busybox -n default -- nslookup kubernetes
Server: 192.168.0.10
Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 192.168.0.1 kubernetes.default.svc.cluster.local
Check that cross-namespace Services resolve
[root@k8s-master01 dashboard]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server: 192.168.0.10
Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local
Name: kube-dns.kube-system
Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local
On all nodes, test access to ports 443 and 53
yum install telnet -y
[root@k8s-master01 dashboard]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 32h
telnet 192.168.0.1 443
#Check kube-dns
[root@k8s-master01 dashboard]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
calico-typha ClusterIP 192.168.229.37 <none> 5473/TCP 29h
kube-dns ClusterIP 192.168.0.10 <none> 53/UDP,53/TCP,9153/TCP 29h
metrics-server ClusterIP 192.168.48.97 <none> 443/TCP 12m
[root@k8s-master01 dashboard]# telnet 192.168.0.10 53
[root@k8s-master01 dashboard]# curl 192.168.0.10:53
curl: (52) Empty reply from server   #this reply means the port is open and responding
#Test pod-to-pod communication
[root@k8s-master01 dashboard]# kubectl get po -n kube-system -owide
...omitted...
[root@k8s-master01 dashboard]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 5m24s 172.27.14.193 k8s-node02 <none> <none>
#Exec into the container; some images have no bash, so failing to enter with bash is normal, try sh or a busybox pod instead
kubectl exec -it busybox -- sh   #or
kubectl exec -it busybox -- bash
#Note: to enter a system component's container, add the namespace, e.g.
kubectl exec -it calico-kube-coxxxx -n kube-system -- sh
kubectl exec -it calico-kube-coxxxx -n kube-system -- bash
From inside, ping Pods on other nodes
Other tests
#Create a pod
kubectl run nginx --image=nginx
#Create a deployment with 3 replicas
kubectl create deploy nginx --image=nginx --replicas=3
kubectl get deploy
kubectl get po -owide   #check which nodes the 3 pods landed on
#Clean up
kubectl delete deploy nginx
kubectl delete po busybox nginx