
Deploying K8S v1.23.6 in Binary Mode (Part 2)


5. Deploy Kubernetes
5-1. Download the installation packages
# Run on master-101:
# Download the Kubernetes server tarball and unpack it (a download accelerator such as Xunlei can speed this up); v1.23.6 is used here:
# Download URL: https://dl.k8s.io/v1.23.6/kubernetes-server-linux-amd64.tar.gz
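Optionally, the tarball's integrity can be verified before unpacking (run in the download directory). This is only a sketch: it assumes a companion .sha256 file is published for this artifact on dl.k8s.io; if it is not, compare the sum against the checksums listed in the official v1.23.6 release notes instead.

# Optional: verify the tarball before unpacking
curl -LO https://dl.k8s.io/v1.23.6/kubernetes-server-linux-amd64.tar.gz.sha256
echo "$(cat kubernetes-server-linux-amd64.tar.gz.sha256)  kubernetes-server-linux-amd64.tar.gz" | sha256sum --check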

[root@master-101 ~]#cd /app/k8s-init/tools/
[root@master-101 tools]#tar xf kubernetes-server-linux-amd64.tar.gz
[root@master-101 tools]#cd kubernetes/server/bin/
[root@master-101 bin]#ls
apiextensions-apiserver kube-controller-manager.tar kube-scheduler.tar
kube-aggregator kube-log-runner kubeadm
kube-apiserver kube-proxy kubectl
kube-apiserver.docker_tag kube-proxy.docker_tag kubectl-convert
kube-apiserver.tar kube-proxy.tar kubelet
kube-controller-manager kube-scheduler mounter
kube-controller-manager.docker_tag kube-scheduler.docker_tag

[root@master-101 bin]#cp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubeadm kubectl /usr/local/bin
[root@master-101 bin]#kubectl version
# Run on master-101:
# Copy the Kubernetes binaries to the other machines

[root@master-101 bin]#scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubeadm kubectl 192.168.100.102:/usr/local/bin/
[root@master-101 bin]#scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubeadm kubectl 192.168.100.103:/usr/local/bin/

# Worker nodes only need kube-proxy, kubelet and kubeadm
[root@master-101 bin]#scp kube-proxy kubelet kubeadm 192.168.100.104:/usr/local/bin/
[root@master-101 bin]#scp kube-proxy kubelet kubeadm 192.168.100.105:/usr/local/bin/
# Run on all nodes:
[root@master-101 bin]#mkdir -vp /etc/kubernetes/{manifests,pki,ssl,cfg} /var/log/kubernetes /var/lib/kubelet
5-2. Deploy kube-apiserver

Description: kube-apiserver is the entry point for every request to the cluster; cluster resources are operated on through its API.

# Run on master-101:
# Create the apiserver certificate signing request (CSR) and sign the certificate with the CA generated earlier.
# Note: the hosts field below must contain the IPs of every master, the load balancer and the VIP; to make future scaling easier, add a few spare IPs as reserves.
# It must also contain the first IP of the Service network (the first address of the --service-cluster-ip-range passed to kube-apiserver, e.g. 10.96.0.1); a quick way to compute it is shown right after the JSON below.
[root@master-101 k8s-init]#tee apiserver-csr.json <<'EOF'
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.100.101",
    "192.168.100.102",
    "192.168.100.103",
    "192.168.100.111",
    "10.96.0.1",
    "wang.cluster.k8s",
    "master-101",
    "master-102",
    "master-103",
    "kubernetes",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "beijing",
      "ST": "beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
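The 10.96.0.1 entry above is the first usable address of the Service CIDR (10.96.0.0/16 here, matching --service-cluster-ip-range below). If you use a different Service CIDR, that first address can be recomputed quickly; a small sketch assuming python3 is available:

# Print the first usable IP of a Service CIDR (10.96.0.0/16 -> 10.96.0.1)
python3 -c 'import ipaddress; print(next(ipaddress.ip_network("10.96.0.0/16").hosts()))'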

# Sign the kube-apiserver HTTPS certificate
[root@master-101 k8s-init]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver

[root@master-101 k8s-init]#ls apiserver*
apiserver-csr.json apiserver-key.pem apiserver.csr apiserver.pem

# Copy the certificates to the certificate directory
[root@master-101 k8s-init]#cp *.pem /etc/kubernetes/ssl/

# Create the token required by the TLS bootstrapping mechanism
[root@master-101 k8s-init]#cat > /etc/kubernetes/bootstrap-token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:bootstrappers"
EOF

Tip: with TLS bootstrapping enabled, the kubelet and kube-proxy on each node must talk to kube-apiserver using valid certificates signed by the cluster CA. When there are many nodes, issuing these client certificates by hand is a lot of work and makes scaling the cluster more complicated.
To simplify this, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet registers with the apiserver as a low-privilege user and requests a certificate, which the apiserver then signs dynamically. This approach is strongly recommended on nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate that we issue centrally.
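The generated bootstrap-token.csv has the format token,user,uid,"groups". A quick way to sanity-check the line that was just written (plain awk, nothing Kubernetes-specific):

[root@master-101 k8s-init]#awk -F, '{printf "token=%s\nuser=%s\nuid=%s\ngroups=%s\n", $1, $2, $3, $4}' /etc/kubernetes/bootstrap-token.csv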
# Run on master-101:
# Create the kube-apiserver configuration file
[root@master-101 k8s-init]#cat > /etc/kubernetes/cfg/kube-apiserver.conf <<'EOF'
KUBE_APISERVER_OPTS="--apiserver-count=3 \
--advertise-address=192.168.100.101 \
--allow-privileged=true \
--authorization-mode=RBAC,Node \
--bind-address=0.0.0.0 \
--enable-aggregator-routing=true \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/bootstrap-token.csv \
--secure-port=6443 \
--service-node-port-range=30000-32767 \
--service-cluster-ip-range=10.96.0.0/16 \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/apiserver-key.pem \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
--etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
--etcd-servers=https://192.168.100.101:2379,https://192.168.100.102:2379,https://192.168.100.103:2379 \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--proxy-client-cert-file=/etc/kubernetes/ssl/apiserver.pem \
--proxy-client-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
--v=2 \
--event-ttl=1h \
--feature-gates=TTLAfterFinished=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes"
EOF

# Tips:
# The audit log flags are optional:
# --audit-log-maxage=30
# --audit-log-maxbackup=3
# --audit-log-maxsize=100
# --audit-log-path=/var/log/kubernetes/kube-apiserver.log

# --logtostderr: log to stderr instead of files
# --v: log verbosity level
# --log-dir: log directory
# --etcd-servers: etcd cluster endpoints
# --bind-address: listening address
# --secure-port: HTTPS port
# --advertise-address: address advertised to the rest of the cluster
# --allow-privileged: allow privileged containers
# --service-cluster-ip-range: virtual IP range for Services
# --enable-admission-plugins: admission control plugins
# --authorization-mode: authorization modes; enables RBAC and Node (node self-management) authorization
# --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
# --token-auth-file: bootstrap token file
# --service-node-port-range: default port range for NodePort Services
# --kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
# --tls-xxx-file: apiserver HTTPS certificate and key
# --etcd-xxxfile: certificates for connecting to the etcd cluster
# --audit-log-xxx: audit log settings

# Tip: on 1.23.* and later, avoid the following deprecated flags:
# Flag --enable-swagger-ui has been deprecated
# Flag --insecure-port has been deprecated
# Flag --alsologtostderr has been deprecated
# Flag --logtostderr has been deprecated, will be removed in a future release
# Flag --log-dir has been deprecated, will be removed in a future release
# Flag --feature-gates=TTLAfterFinished=true has been deprecated, will be removed in a future release (still usable for now)
# Run on master-101:
# Create the kube-apiserver systemd service file
[root@master-101 k8s-init]#cat > /lib/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/cfg/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
LimitNPROC=65535

[Install]
WantedBy=multi-user.target
EOF
# Run on master-101:
# Sync the files above to the other master nodes

[root@master-101 k8s-init]#scp -rp /etc/kubernetes 192.168.100.102:/etc/
[root@master-101 k8s-init]#scp -rp /etc/kubernetes 192.168.100.103:/etc/

[root@master-101 k8s-init]#scp /lib/systemd/system/kube-apiserver.service 192.168.100.103:/lib/systemd/system/kube-apiserver.service
[root@master-101 k8s-init]#scp /lib/systemd/system/kube-apiserver.service 192.168.100.102:/lib/systemd/system/kube-apiserver.service
# On master-102 and master-103, modify /etc/kubernetes/cfg/kube-apiserver.conf respectively:

# Run on master-102:
[root@master-102 ~]#sed -i 's#--advertise-address=192.168.100.101#--advertise-address=192.168.100.102#g' /etc/kubernetes/cfg/kube-apiserver.conf

# Run on master-103:
[root@master-103 ~]#sed -i 's#--advertise-address=192.168.100.101#--advertise-address=192.168.100.103#g' /etc/kubernetes/cfg/kube-apiserver.conf
# Run on all master nodes:
[root@master-101 k8s-init]#systemctl daemon-reload
[root@master-101 k8s-init]#systemctl enable --now kube-apiserver.service && systemctl status kube-apiserver.service

# Test the apiserver endpoints
[root@master-101 ~]#curl --insecure https://192.168.100.101:6443
[root@master-101 ~]#curl --insecure https://192.168.100.102:6443
[root@master-101 ~]#curl --insecure https://192.168.100.103:6443
[root@master-101 ~]#curl --insecure https://192.168.100.111:6443
# Expected result:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
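The 403 above is expected: anonymous requests to / are denied by RBAC. The default system:public-info-viewer binding does allow unauthenticated access to the health and version endpoints, so a slightly more meaningful smoke test (adjust IPs to your environment) looks like this; the journalctl line also surfaces any deprecated flags the apiserver warned about at startup:

[root@master-101 ~]#curl --insecure https://192.168.100.101:6443/healthz    # expect: ok
[root@master-101 ~]#curl --insecure https://192.168.100.111:6443/version    # expect: JSON containing "gitVersion": "v1.23.6"
[root@master-101 ~]#journalctl -u kube-apiserver --no-pager | grep -i deprecated | head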
5-3. Deploy kubectl

Description: kubectl is the cluster management command-line client; it interacts with the API server to view and manage cluster resources.

# Run on master-101:
# Create the kubectl CSR file and generate the certificate

[root@master-101 k8s-init]#tee admin-csr.json <<'EOF'
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "beijing",
      "ST": "beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

[root@master-101 k8s-init]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

[root@master-101 k8s-init]#ls admin*
admin-csr.json admin-key.pem admin.csr admin.pem

[root@master-101 k8s-init]#cp admin* /etc/kubernetes/ssl/
# Run on master-101:
# Generate the kubeconfig file
# admin.conf is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate and the client certificate kubectl uses.

[root@master-101 k8s-init]#cd /etc/kubernetes/

# Configure the cluster entry
# A domain name can also be used here (https://wang.cluster.k8s:16443)
[root@master-101 kubernetes]#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.100.111:16443 --kubeconfig=admin.conf

# Configure the cluster user credentials
[root@master-101 kubernetes]#kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --client-key=/etc/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=admin.conf

# Configure the context
[root@master-101 kubernetes]#kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=admin.conf

# Switch to the context
[root@master-101 kubernetes]#kubectl config use-context kubernetes --kubeconfig=admin.conf
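Before distributing admin.conf, it is worth confirming that the kubeconfig actually reaches the apiserver through the VIP. A quick sanity check (paths as generated above):

[root@master-101 kubernetes]#kubectl config view --kubeconfig=admin.conf --minify
[root@master-101 kubernetes]#kubectl --kubeconfig=admin.conf get --raw /version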
# Run on master-101:
# Put the kubectl config in place and create the role binding
[root@master-101 k8s-init]#mkdir /root/.kube && cp /etc/kubernetes/admin.conf ~/.kube/config
[root@master-101 k8s-init]#kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config

# View cluster information
[root@master-101 ~]#export KUBECONFIG=$HOME/.kube/config
[root@master-101 ~]#kubectl cluster-info
Kubernetes control plane is running at https://192.168.100.111:16443
CoreDNS is running at https://192.168.100.111:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# Check the status of cluster components
[root@master-101 ~]#kubectl get componentstatuses

# List the namespaces and all resource objects across namespaces
[root@master-101 ~]#kubectl get all --all-namespaces
[root@master-101 ~]#kubectl get ns
# Run on master-101 (make sure /root/.kube exists on the target hosts first):
[root@master-101 ~]#scp /root/.kube/config 192.168.100.102:/root/.kube/
[root@master-101 ~]#scp /root/.kube/config 192.168.100.103:/root/.kube/
# Configure kubectl command completion (beginners may want to skip this until the commands are familiar)
# Method 1:
[root@master-101 ~]#apt install -y bash-completion
[root@master-101 ~]#source /usr/share/bash-completion/bash_completion

[root@master-101 ~]#kubectl completion bash > ~/.kube/completion.bash.inc
[root@master-101 ~]#. ~/.kube/completion.bash.inc

[root@master-101 ~]#tee -a $HOME/.bash_profile <<EOF
source ~/.kube/completion.bash.inc
EOF

# Method 2:
[root@master-103 ~]#apt install -y bash-completion
[root@master-103 ~]#source /usr/share/bash-completion/bash_completion
[root@master-103 ~]#source <(kubectl completion bash)
[root@master-103 ~]#tee -a $HOME/.bash_profile <<EOF
source <(kubectl completion bash)
EOF
5-4. Deploy kube-controller-manager

Description: kube-controller-manager is the cluster's controller component; it bundles multiple controllers and acts as the automated control center for cluster objects.

# Run on master-101:
# Create the kube-controller-manager CSR file and generate the certificate

[root@master-101 k8s-init]#tee controller-manager-csr.json <<'EOF'
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "192.168.100.101",
    "192.168.100.102",
    "192.168.100.103"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "beijing",
      "ST": "beijing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

# Notes:
# * The hosts list contains the IPs of all kube-controller-manager nodes.
# * CN is system:kube-controller-manager.
# * O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.


# Issue the kube-controller-manager certificate
[root@master-101 k8s-init]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes controller-manager-csr.json |cfssljson -bare controller-manager

[root@master-101 k8s-init]#ls controller*

[root@master-101 k8s-init]#cp controller* /etc/kubernetes/ssl/
# Run on master-101: generate the controller-manager kubeconfig file

[root@master-101 k8s-init]#cd /etc/kubernetes/

[root@master-101 kubernetes]#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.100.111:16443 --kubeconfig=controller-manager.conf

[root@master-101 kubernetes]#kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/ssl/controller-manager.pem --client-key=/etc/kubernetes/ssl/controller-manager-key.pem --embed-certs=true --kubeconfig=controller-manager.conf

[root@master-101 kubernetes]#kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=controller-manager.conf

[root@master-101 kubernetes]#kubectl config use-context system:kube-controller-manager --kubeconfig=controller-manager.conf
# Run on master-101: create the kube-controller-manager configuration file
[root@master-101 kubernetes]#cat > /etc/kubernetes/cfg/kube-controller-manager.conf << "EOF"
KUBE_CONTROLLER_MANAGER_OPTS="--allocate-node-cidrs=true \
--bind-address=127.0.0.1 \
--secure-port=10257 \
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf \
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf \
--cluster-name=kubernetes \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--controllers=*,bootstrapsigner,tokencleaner \
--cluster-cidr=10.128.0.0/16 \
--service-cluster-ip-range=10.96.0.0/16 \
--use-service-account-credentials=true \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--tls-cert-file=/etc/kubernetes/ssl/controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/controller-manager-key.pem \
--leader-elect=true \
--cluster-signing-duration=87600h \
--v=2 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.conf"
EOF
# Run on master-101:
# Create the kube-controller-manager systemd service file

[root@master-101 kubernetes]#cat > /lib/systemd/system/kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
# Copy the files to the other master nodes:
[root@master-101 kubernetes]#scp /etc/kubernetes/ssl/controller-manager.pem /etc/kubernetes/ssl/controller-manager-key.pem /etc/kubernetes/controller-manager.conf /etc/kubernetes/cfg/kube-controller-manager.conf /lib/systemd/system/kube-controller-manager.service 192.168.100.102:/tmp
[root@master-101 kubernetes]#scp /etc/kubernetes/ssl/controller-manager.pem /etc/kubernetes/ssl/controller-manager-key.pem /etc/kubernetes/controller-manager.conf /etc/kubernetes/cfg/kube-controller-manager.conf /lib/systemd/system/kube-controller-manager.service 192.168.100.103:/tmp

# Run on master-102 and master-103 respectively:
[root@master-103 ~]#cd /tmp/
[root@master-103 tmp]#mv controller-manager*.pem /etc/kubernetes/ssl
[root@master-103 tmp]#mv controller-manager.conf /etc/kubernetes/controller-manager.conf
[root@master-103 tmp]#mv kube-controller-manager.conf /etc/kubernetes/cfg/kube-controller-manager.conf
[root@master-103 tmp]#mv kube-controller-manager.service /lib/systemd/system/kube-controller-manager.service
# Run on all master nodes:
[root@master-101 kubernetes]#systemctl daemon-reload
[root@master-101 kubernetes]#systemctl enable --now kube-controller-manager.service && systemctl status kube-controller-manager
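Once kube-controller-manager is running, a health check against its secure port and a look at leader election confirm it is working. A quick verification sketch (10257 matches --secure-port above; with several masters, exactly one instance should hold the lease):

[root@master-101 kubernetes]#curl --insecure https://127.0.0.1:10257/healthz    # expect: ok
[root@master-101 kubernetes]#kubectl get leases -n kube-system kube-controller-manager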
5-5. Deploy kube-scheduler

Description: kube-scheduler is the cluster's scheduler component; it picks a suitable node for each Pod and assigns workloads to it.

# Run on master-101:
# Create the kube-scheduler CSR file and generate the certificate

[root@master-101 k8s-init]#tee scheduler-csr.json <<'EOF'
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.100.101",
    "192.168.100.102",
    "192.168.100.103"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "beijing",
      "ST": "beijing",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

# Issue the kube-scheduler certificate
[root@master-101 k8s-init]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare scheduler
[root@master-101 k8s-init]#ls scheduler*
scheduler-csr.json scheduler-key.pem scheduler.csr scheduler.pem
[root@master-101 k8s-init]#cp scheduler*.pem /etc/kubernetes/ssl/
# Run on master-101:
# Create kube-scheduler's kubeconfig file

[root@master-101 k8s-init]#cd /etc/kubernetes/

[root@master-101 kubernetes]#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.100.111:16443 --kubeconfig=scheduler.conf

[root@master-101 kubernetes]#kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/ssl/scheduler.pem --client-key=/etc/kubernetes/ssl/scheduler-key.pem --embed-certs=true --kubeconfig=scheduler.conf

[root@master-101 kubernetes]#kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=scheduler.conf

[root@master-101 kubernetes]#kubectl config use-context system:kube-scheduler --kubeconfig=scheduler.conf
# Run on master-101:
# Create the kube-scheduler configuration file

[root@master-101 kubernetes]#cat > /etc/kubernetes/cfg/kube-scheduler.conf << "EOF"
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--secure-port=10259 \
--kubeconfig=/etc/kubernetes/scheduler.conf \
--authentication-kubeconfig=/etc/kubernetes/scheduler.conf \
--authorization-kubeconfig=/etc/kubernetes/scheduler.conf \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--tls-cert-file=/etc/kubernetes/ssl/scheduler.pem \
--tls-private-key-file=/etc/kubernetes/ssl/scheduler-key.pem \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
# Run on master-101:
# Create the kube-scheduler systemd service file

[root@master-101 kubernetes]#cat > /lib/systemd/system/kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/cfg/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
# Copy the files to the other master nodes:
[root@master-101 kubernetes]#scp /etc/kubernetes/ssl/scheduler.pem /etc/kubernetes/ssl/scheduler-key.pem /etc/kubernetes/scheduler.conf /etc/kubernetes/cfg/kube-scheduler.conf /lib/systemd/system/kube-scheduler.service 192.168.100.102:/tmp
[root@master-101 kubernetes]#scp /etc/kubernetes/ssl/scheduler.pem /etc/kubernetes/ssl/scheduler-key.pem /etc/kubernetes/scheduler.conf /etc/kubernetes/cfg/kube-scheduler.conf /lib/systemd/system/kube-scheduler.service 192.168.100.103:/tmp

# Run on master-102 and master-103 respectively:
[root@master-103 ~]#cd /tmp/
[root@master-103 tmp]#mv scheduler*.pem /etc/kubernetes/ssl/
[root@master-103 tmp]#mv scheduler.conf /etc/kubernetes/scheduler.conf
[root@master-103 tmp]#mv kube-scheduler.conf /etc/kubernetes/cfg/kube-scheduler.conf
[root@master-103 tmp]#mv kube-scheduler.service /lib/systemd/system/kube-scheduler.service
# On all master nodes:
# Reload systemd, then enable and start the kube-scheduler service
[root@master-101 kubernetes]#systemctl daemon-reload
[root@master-101 kubernetes]#systemctl enable --now kube-scheduler.service
[root@master-101 kubernetes]#systemctl status kube-scheduler.service
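The scheduler can be verified in the same way; 10259 matches --secure-port in its config, and the leader lease shows which master currently holds the election:

[root@master-101 kubernetes]#curl --insecure https://127.0.0.1:10259/healthz    # expect: ok
[root@master-101 kubernetes]#kubectl get leases -n kube-system kube-scheduler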
5-6. Deploy kubelet
# Run on master-101:
# Read the BOOTSTRAP_TOKEN and create kubelet's bootstrap kubeconfig file kubelet-bootstrap.kubeconfig

# Read the bootstrap token value
[root@master-101 kubernetes]#BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/bootstrap-token.csv)

# Set the cluster (the domain https://wang.cluster.k8s:16443 is used here; the VIP address works too)
[root@master-101 kubernetes]#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://wang.cluster.k8s:16443 --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig

# Set the credentials
[root@master-101 kubernetes]#kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig

# Set the context
[root@master-101 kubernetes]#kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig

# Switch to the context
[root@master-101 kubernetes]#kubectl config use-context default --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig
# Run on master-101:
# Grant the RBAC roles
[root@master-101 kubernetes]#kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap
[root@master-101 kubernetes]#kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig

# Authorize kubelets to create CSRs
[root@master-101 kubernetes]#kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers

# Approve the CSRs automatically:
# allow kubelets to request and receive new certificates
[root@master-101 kubernetes]#kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers

# Allow kubelets to renew their client certificates
[root@master-101 kubernetes]#kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:bootstrappers
# Run on master-101:
# Create the kubelet configuration file
[root@master-101 kubernetes]#cat > /etc/kubernetes/cfg/kubelet-config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: AlwaysAllow
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
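Note that cgroupDriver: systemd above must match the cgroup driver used by the container runtime, otherwise the kubelet will fail to run Pods. With containerd (the runtime this series uses), a quick check, assuming its config lives at the default /etc/containerd/config.toml:

[root@master-101 kubernetes]#grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = true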
# Run on master-101:
# Create the kubelet systemd service file:

[root@master-101 kubernetes]#mkdir -pv /var/lib/kubelet
[root@master-101 kubernetes]#cat > /lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
Wants=network-online.target
After=network-online.target
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig --config=/etc/kubernetes/cfg/kubelet-config.yaml --cert-dir=/etc/kubernetes/ssl --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --root-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin/ --cni-conf-dir=/etc/cni/net.d --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --rotate-certificates --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 --image-pull-progress-deadline=15m --alsologtostderr=true --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --node-labels=node.kubernetes.io/node=''
StartLimitInterval=0
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
# Run on master-101:
# Copy the files to the other nodes:
[root@master-101 kubernetes]#mkdir -pv /var/lib/kubelet /etc/kubernetes/cfg
[root@master-101 kubernetes]#scp /etc/kubernetes/kubelet-bootstrap.kubeconfig /etc/kubernetes/cfg/kubelet-config.yaml /lib/systemd/system/kubelet.service 192.168.100.102:/tmp
[root@master-101 kubernetes]#scp /etc/kubernetes/kubelet-bootstrap.kubeconfig /etc/kubernetes/cfg/kubelet-config.yaml /lib/systemd/system/kubelet.service 192.168.100.103:/tmp
[root@master-101 kubernetes]#scp /etc/kubernetes/kubelet-bootstrap.kubeconfig /etc/kubernetes/cfg/kubelet-config.yaml /lib/systemd/system/kubelet.service 192.168.100.104:/tmp
[root@master-101 kubernetes]#scp /etc/kubernetes/kubelet-bootstrap.kubeconfig /etc/kubernetes/cfg/kubelet-config.yaml /lib/systemd/system/kubelet.service 192.168.100.105:/tmp

[root@master-101 kubernetes]#scp /etc/kubernetes/ssl/ca.pem 192.168.100.104:/etc/kubernetes/ssl/ca.pem
[root@master-101 kubernetes]#scp /etc/kubernetes/ssl/ca.pem 192.168.100.105:/etc/kubernetes/ssl/ca.pem


# Run on master-102, master-103, node-104 and node-105:
[root@master-103 ~]#mkdir -pv /var/lib/kubelet /etc/kubernetes/cfg
[root@master-103 ~]#cd /tmp/
[root@master-103 tmp]#mv kubelet-bootstrap.kubeconfig /etc/kubernetes/kubelet-bootstrap.kubeconfig
[root@master-103 tmp]#mv kubelet-config.yaml /etc/kubernetes/cfg/kubelet-config.yaml
[root@master-103 tmp]#mv kubelet.service /lib/systemd/system/kubelet.service
# Run on all nodes:
[root@master-101 kubernetes]#systemctl daemon-reload
[root@master-101 kubernetes]#systemctl enable --now kubelet.service
[root@master-101 kubernetes]#systemctl status kubelet.service
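After the kubelets start, each node submits a bootstrap CSR; with the auto-approval bindings created earlier the node certificates should be issued automatically. A quick check from master-101 (the manual approve line is only needed if a CSR stays Pending); nodes remain NotReady until a CNI network plugin is deployed:

[root@master-101 kubernetes]#kubectl get csr
[root@master-101 kubernetes]#kubectl get csr -o name | xargs -r kubectl certificate approve    # only needed for Pending CSRs
[root@master-101 kubernetes]#kubectl get nodes -o wide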
5-7. Deploy kube-proxy

Description: kube-proxy maintains the network rules on each node so that Pods can be reached correctly from inside and outside the cluster; it also handles load balancing and traffic forwarding for Services.

# Run on master-101:
# Create the kube-proxy CSR file and generate the certificate:

[root@master-101 k8s-init]#tee proxy-csr.json <<'EOF'
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "beijing",
      "ST": "beijing",
      "O": "system:kube-proxy",
      "OU": "System"
    }
  ]
}
EOF

# Issue the kube-proxy certificate
[root@master-101 k8s-init]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes proxy-csr.json | cfssljson -bare proxy
[root@master-101 k8s-init]#ls proxy*
[root@master-101 k8s-init]#cp proxy* /etc/kubernetes/ssl
# Run on master-101:
# Create the kube-proxy kubeconfig file:

[root@master-101 k8s-init]#cd /etc/kubernetes/

# Set the cluster; a domain name (https://wang.cluster.k8s:16443) can also be used here.
[root@master-101 kubernetes]#kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.100.111:16443 --kubeconfig=kube-proxy.kubeconfig

# Set the credentials
[root@master-101 kubernetes]#kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/proxy.pem --client-key=/etc/kubernetes/ssl/proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

# Set the context
[root@master-101 kubernetes]#kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

# Switch to the context
[root@master-101 kubernetes]#kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Run on master-101:
# Create the kube-proxy configuration file:

[root@master-101 kubernetes]#cat > /etc/kubernetes/cfg/kube-proxy.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
healthzBindAddress: 0.0.0.0:10256
metricsBindAddress: 0.0.0.0:10249
hostnameOverride: __HOSTNAME__  # "__HOSTNAME__" is a placeholder; it is replaced with the local hostname after copying to each node
clusterCIDR: 10.128.0.0/16
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
mode: "ipvs"
ipvs:
  excludeCIDRs:
  - 10.128.0.1/32
EOF
# Run on master-101:
# Create the kube-proxy systemd service file:

[root@master-101 kubernetes]#mkdir -pv /var/lib/kube-proxy
[root@master-101 kubernetes]#cat > /lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy --config=/etc/kubernetes/cfg/kube-proxy.yaml --alsologtostderr=true --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Run on master-101:
# Copy the files to the other nodes:
[root@master-101 kubernetes]#mkdir -pv /var/lib/kube-proxy
[root@master-101 kubernetes]#scp /etc/kubernetes/kube-proxy.kubeconfig /etc/kubernetes/ssl/proxy.pem /etc/kubernetes/ssl/proxy-key.pem /etc/kubernetes/cfg/kube-proxy.yaml /lib/systemd/system/kube-proxy.service 192.168.100.102:/tmp
[root@master-101 kubernetes]#scp /etc/kubernetes/kube-proxy.kubeconfig /etc/kubernetes/ssl/proxy.pem /etc/kubernetes/ssl/proxy-key.pem /etc/kubernetes/cfg/kube-proxy.yaml /lib/systemd/system/kube-proxy.service 192.168.100.103:/tmp
[root@master-101 kubernetes]#scp /etc/kubernetes/kube-proxy.kubeconfig /etc/kubernetes/ssl/proxy.pem /etc/kubernetes/ssl/proxy-key.pem /etc/kubernetes/cfg/kube-proxy.yaml /lib/systemd/system/kube-proxy.service 192.168.100.104:/tmp
[root@master-101 kubernetes]#scp /etc/kubernetes/kube-proxy.kubeconfig /etc/kubernetes/ssl/proxy.pem /etc/kubernetes/ssl/proxy-key.pem /etc/kubernetes/cfg/kube-proxy.yaml /lib/systemd/system/kube-proxy.service 192.168.100.105:/tmp

# Run on master-102, master-103, node-104 and node-105:
[root@master-103 ~]#mkdir -pv /var/lib/kube-proxy
[root@master-103 ~]#cd /tmp/
[root@master-103 tmp]#cp kube-proxy.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig
[root@master-103 tmp]#cp proxy*.pem /etc/kubernetes/ssl/
[root@master-103 tmp]#cp kube-proxy.yaml /etc/kubernetes/cfg/kube-proxy.yaml
[root@master-103 tmp]#cp kube-proxy.service /lib/systemd/system/kube-proxy.service
# Run on all nodes:

# Replace the "hostnameOverride" placeholder in kube-proxy.yaml with the current hostname
[root@master-101 kubernetes]# sed -i "s#__HOSTNAME__#$(hostname)#g" /etc/kubernetes/cfg/kube-proxy.yaml

[root@master-101 kubernetes]#systemctl daemon-reload
[root@master-101 kubernetes]#systemctl enable --now kube-proxy.service
[root@master-101 kubernetes]#systemctl status kube-proxy.service
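With mode: "ipvs" configured, kube-proxy should report the ipvs proxy mode and create IPVS virtual servers as Services appear (for example the kubernetes Service at 10.96.0.1:443). A quick check on any node; the ports match the bind addresses in kube-proxy.yaml, and ipvsadm must be installed for the last command:

[root@master-101 kubernetes]#curl http://127.0.0.1:10249/proxyMode    # expect: ipvs
[root@master-101 kubernetes]#curl http://127.0.0.1:10256/healthz
[root@master-101 kubernetes]#ipvsadm -Ln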
