
Scaling kubeadm from one master and two workers to three masters and two workers -> ended in failure


Requirement: scale a kubeadm cluster from one master and two workers to three masters and two workers.
Reference: https://mp.weixin.qq.com/s?__biz=MzAxOTc3Mjk1Ng==&mid=2247485240&idx=1&sn=89c1e1aa4988ee4d1f2c134cdcf9c40b&chksm=9bc0a44bacb72d5d48f7f5b2d50edc3a9eb13bb10e554e9b5401143010e294f7634c93b8a795&scene=21#wechat_redirect
Docker and kubeadm need to be installed on every node in advance.
Reference: https://www.cnblogs.com/sunnyyangwang/p/17516129.html
Package an image already in Docker into a .tar.gz file: docker save -o coredns:1.3.1.tar.gz k8s.gcr.io/coredns:1.3.1
Load a .tar.gz file back into an image: docker load -i etcd:3.3.10.tar.gz
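Rather than saving image by image, everything can be exported at once. A sketch only, assuming the images to ship are the k8s.gcr.io images currently present on the master and that /root/images is the staging directory (basename turns e.g. k8s.gcr.io/coredns:1.3.1 into the file name coredns:1.3.1.tar.gz, matching the convention above):
mkdir -p /root/images && cd /root/images
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep k8s.gcr.io); do
  docker save -o "$(basename $img).tar.gz" "$img"
done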

Original cluster node info:
[root@k8s-master ~]# kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 18d v1.14.3 192.168.1.203 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://17.3.3
k8s-node1 Ready <none> 18d v1.14.3 192.168.1.202 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://17.3.3
k8s-node2 NotReady <none> 18d v1.14.3 192.168.1.201 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://17.3.3
Let's get started.
1. Load the images
With all the prerequisites above in place, this step must be run on every node.
To save time, directly import the images previously downloaded on the master.
[root@k8s-mas2 images]# docker load -i etcd:3.3.10.tar.gz
[root@k8s-mas2 images]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 4 years ago 258 MB
Load each of the images a master needs in turn (the default set of seven, as listed later by kubeadm config images list).
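Loading them one by one gets tedious; a small loop does the same thing, assuming the tarballs sit in the current directory:
for f in *.tar.gz; do docker load -i "$f"; done
docker images    # confirm all seven k8s.gcr.io images are present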

2. Copy the certificates from the master1 node to master2 and master3
(1) Create the certificate directories on the master2 and master3 nodes
[root@k8s-mas2 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
(2) On the master1 node, copy the certificates over to master2 and master3
Run the following on master1; it is best to copy the scp commands one line at a time so nothing goes wrong:
[root@k8s-master ~]# ssh-keygen -t rsa
[root@k8s-master ~]# ssh-copy-id k8s-mas2
scp /etc/kubernetes/pki/ca.crt k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt k8s-mas2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key k8s-mas2:/etc/kubernetes/pki/etcd/
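The same copy step as a loop, for reference (assumptions: ssh-copy-id has already been run for every target host, and the third master's hostname is k8s-mas3 by analogy with k8s-mas2):
for host in k8s-mas2 k8s-mas3; do
  for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
    scp /etc/kubernetes/pki/$f $host:/etc/kubernetes/pki/
  done
  scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} $host:/etc/kubernetes/pki/etcd/
done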
Once the certificates are copied, run the following commands on master2 and master3;
this is how master2 and master3 get joined to the cluster.
Because the token kubeadm generates at init time is only valid for 24 hours, a fresh token has to be created.
a. Generate a token on the master node
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.203:6443 --token nr98xy.5fqtj72ec6sdosnr --discovery-token-ca-cert-hash sha256:63c950dce2c70dbd7b7db5299d807b58656bb2987cdef752b23499fc2d9e704a
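As an aside, if the token is still valid and only the --discovery-token-ca-cert-hash value is needed, it can be recomputed from the CA certificate on the master; this is the standard openssl recipe from the kubeadm documentation:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'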
b. Log in to the node being added and run the command above.
Simply copy the join command printed by the master and run it verbatim on the new node.
[root@k8s-mas2 ~]# kubeadm join 192.168.1.203:6443 --token nr98xy.5fqtj72ec6sdosnr --discovery-token-ca-cert-hash sha256:63c950dce2c70dbd7b7db5299d807b58656bb2987cdef752b23499fc2d9e704a
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster
c. Verify
Checking from the first master node: the new node was added successfully.
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-mas2 Ready <none> 2m29s v1.14.3
k8s-master Ready master 18d v1.14.3
k8s-node1 Ready <none> 18d v1.14.3
k8s-node2 NotReady <none> 18d v1.14.3
As shown above, a node joined this way comes in as a plain worker by default (a kubeadm join without --experimental-control-plane always produces a worker; the control-plane variant is attempted further below).
Analysis: the apiserver address was baked in when the cluster was first initialized; changing it means re-initializing the cluster, which is effectively a rebuild, so this kind of change is not recommended. (In production it depends on scale, but 3 or 5 master nodes are generally enough.)
Let's test a rebuild anyway.
1. Modify the init configuration
[root@k8s-master ~]# cat kubeadm_thr.yaml

apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.203
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - 192.168.1.203
  - 192.168.1.71
  - 192.168.1.70
  - 192.168.1.72
  - 192.168.1.77
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.203:6443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:       ## local-->external
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io      ## k8s.gcr.io-->registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Note: the original master is 203; 70 and 71 are the two newly added masters, 72 is a spare, and 77 is the VIP for master high availability.
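Worth flagging before the re-init: controlPlaneEndpoint above still points at master1 itself (192.168.1.203:6443), so the API endpoint stays a single point of failure no matter how many masters join. For real HA it would normally point at the VIP; a one-line tweak, assuming keepalived/haproxy already answer on 192.168.1.77:6443:
sed -i 's|controlPlaneEndpoint: "192.168.1.203:6443"|controlPlaneEndpoint: "192.168.1.77:6443"|' kubeadm_thr.yaml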
2. List the required images
[root@k8s-master ~]# kubeadm config images list --config kubeadm_thr.yaml
k8s.gcr.io/kube-apiserver:v1.14.3
k8s.gcr.io/kube-controller-manager:v1.14.3
k8s.gcr.io/kube-scheduler:v1.14.3
k8s.gcr.io/kube-proxy:v1.14.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
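If the nodes cannot reach k8s.gcr.io, the Aliyun mirror mentioned in the config comment can be used to pull and retag instead. A sketch, assuming the mirror carries the same tags as upstream:
for img in $(kubeadm config images list --config kubeadm_thr.yaml); do
  mirror=registry.aliyuncs.com/google_containers/${img##*/}
  docker pull $mirror && docker tag $mirror $img
done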
[root@k8s-master ~]# kubeadm reset
[root@k8s-master ~]# kubeadm init --config kubeadm_thr.yaml
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite "/root/.kube/config"? y
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 100s v1.14.3
The annoying part: every node has to be reset and re-joined.
[root@k8s-node1 ~]# kubeadm reset
[root@k8s-node1 ~]# kubeadm join 192.168.1.203:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:c570b9e9e3db763d81f05e80d15ae48c130ed0595f48e8809aa8fe1f1d859957
This step may hang indefinitely; in my case it only succeeded after rebooting the machine and re-running the command.
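Before resorting to a reboot, the kubelet log usually says why the TLS bootstrap is stuck (cgroup-driver mismatch, unreachable apiserver, expired token, and so on); a minimal check:
systemctl status kubelet
journalctl -u kubelet -f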
[root@k8s-master ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 12m v1.14.3
k8s-node1 NotReady <none> 49s v1.14.3
Install the flannel and kube-proxy components to get the pods running.
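For reference, the flannel install in the v1.14 era was a single kubectl apply against the upstream manifest (the URL below was the one commonly used at the time; the project has since moved to the flannel-io org, so verify it before copying):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml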
[root@k8s-master ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 3h15m v1.14.3
k8s-node1 Ready <none> 3h4m v1.14.3
Configure the second master node
Same as above: kubeadm reset, then recreate the directories.
[root@k8s-mas2 ~]# kubeadm reset
[root@k8s-mas2 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
Copy the files to the second master node (run on master1):
scp /etc/kubernetes/pki/ca.crt k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key k8s-mas2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt k8s-mas2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key k8s-mas2:/etc/kubernetes/pki/etcd/
[root@k8s-mas2 ~]# kubeadm join 192.168.1.203:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:c570b9e9e3db763d81f05e80d15ae48c130ed0595f48e8809aa8fe1f1d859957 --experimental-control-plane

Note: the kubeadm join command above is the one generated at init time.
      --experimental-control-plane: this flag tells kubeadm the joining node is a master (control-plane) node.
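For completeness: v1.14 also shipped an experimental shortcut that avoids the manual scp of certificates entirely (the flags were renamed --upload-certs / --control-plane in v1.15); a hedged sketch:
kubeadm init --config kubeadm_thr.yaml --experimental-upload-certs
init then prints a certificate key, which the joining master passes along:
kubeadm join 192.168.1.203:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:c570b9e9e3db763d81f05e80d15ae48c130ed0595f48e8809aa8fe1f1d859957 --experimental-control-plane --certificate-key <key-printed-by-init>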
In the event, the join failed with the error below, complaining that etcd never became available as a cluster.

[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
error execution phase control-plane-join/etcd: error creating local etcd static pod manifest file: timeout waiting for etcd cluster to be available
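To see what state etcd was actually left in, one can query the member list from master1 with the v3 API and kubeadm's default certificate paths (a diagnostic sketch; etcdctl must be available on the host, or the same command can be run inside the etcd container):
ETCDCTL_API=3 etcdctl --endpoints=https://192.168.1.203:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  member list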
Fixing this would mean turning etcd itself into a proper cluster first, and that gets considerably more involved.
Conclusion, as shown above: expanding a single-master kubeadm cluster this way is not very feasible.
=========
Open issues
1. Deploying a three-master, two-worker cluster with kubeadm
   To do: try expanding to three masters and three workers;
   try expanding to four masters.

2. Deploy the cluster from binaries instead?

Tags: three-master, pki, kubernetes, etc, master, etcd, one-master, kubeadm, k8s
From: https://www.cnblogs.com/sunnyyangwang/p/17588934.html
