
Deploying a Dual-Master Highly Available K8S Cluster



Original by Wang Siyu

How the HA works: keepalived's VRRP protocol creates a VIP, and kubeadm init uses port 6443 on that VIP to initialize the cluster; the second master then joins the cluster directly in the master (control-plane) role.

You can also go further and put haproxy or nginx in the middle as a load balancer.
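For reference only, if you did add haproxy in front of the two apiservers, the fragment might look like the sketch below. This is hypothetical: the listen port 16443 and this config are not used anywhere else in this article, which sticks to the bare keepalived VIP on 6443.

# hypothetical fragment appended to /etc/haproxy/haproxy.cfg on both masters
cat <<EOF >> /etc/haproxy/haproxy.cfg
frontend k8s-apiserver
    bind *:16443
    mode tcp
    default_backend k8s-masters
backend k8s-masters
    mode tcp
    balance roundrobin
    server master01 192.168.197.130:6443 check
    server master02 192.168.197.131:6443 check
EOF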

Environment preparation

Software and OS versions:

k8s 1.23.3
docker 20.10.21
Linux CentOS 7.8

Four servers:

master01 192.168.197.130
master02 192.168.197.131
node01   192.168.197.132
node02   192.168.197.133

Basic system tuning: omitted.

Installation

1.keepalived

Run on both master nodes:

yum install keepalived -y

master01 configuration

The VIP is defined as 192.168.197.200.

[root@master01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
   router_id k-master
   notification_email {              # mail notification, optional
   [email protected]
   }
   notification_email_from [email protected]   # mail notification, optional
   smtp_server smtp.pinuc.com        # mail notification, optional
   smtp_connect_timeout 30           # mail notification, optional
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"   # health-check script
    interval 3                                    # run the check every 3 seconds
    weight -51                                    # subtract 51 from the priority when the script exits non-zero
}

vrrp_instance VI-k-master {           # instance name
    state MASTER                      # this node is the MASTER, must be uppercase
    interface ens33                   # your NIC name
    virtual_router_id 51              # arbitrary id; must be identical on master and backup, otherwise split-brain
    priority 100                      # higher number = higher priority
    advert_int 3                      # advertisement interval
    authentication {
        auth_type PASS                # auth protocol, PASS or AH
        auth_pass 1234                # auth password
    }
    virtual_ipaddress {
        192.168.197.200               # the VIP; with a single NIC, pick an address in that NIC's subnet
    }
    track_script {
        check_apiserver               # reference the health-check script defined above
    }
}

master02 configuration

[root@master02 keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id k-backup
   notification_email {
   [email protected]
   }
   notification_email_from [email protected]
   smtp_server smtp.pinuc.com
   smtp_connect_timeout 30
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -51
}

vrrp_instance VI-k-master {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.197.200
    }
    track_script {
        check_apiserver
    }
}

Health check script

[root@master01 keepalived]# vim check-apiserver.sh
#!/bin/bash
errorExit() {
    echo "$*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET localhost:6443"
if ip addr | grep -q 192.168.197.200; then
    curl --silent --max-time 2 --insecure https://192.168.197.200:6443/ -o /dev/null || errorExit "Error GET vip:6443"
fi
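The script must exist and be executable on both masters, since master02's config references it as well. A quick way to handle that, assuming passwordless root SSH between the masters:

# run on master01; the IP is used because /etc/hosts is not set up yet at this point
chmod +x /etc/keepalived/check-apiserver.sh
scp /etc/keepalived/check-apiserver.sh 192.168.197.131:/etc/keepalived/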

Start the service (on both masters)

systemctl start keepalived
systemctl enable keepalived

Check that the service and the VIP are working

[root@master01 keepalived]# ip a | grep 197.200
    inet 192.168.197.200/32 scope global ens33
[root@master01 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-11-17 15:20:43 CST; 9min ago
  Process: 52236 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 52237 (keepalived)
    Tasks: 3
   Memory: 2.7M
   CGroup: /system.slice/keepalived.service
           ├─52237 /usr/sbin/keepalived -D
           ├─52238 /usr/sbin/keepalived -D
           └─52239 /usr/sbin/keepalived -D

Nov 17 15:20:51 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: VRRP_Instance(VI-k-master) Sending/queueing gratuitous ARPs on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200

The output above shows the service is working.
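Optionally, a quick failover test at this stage (a sketch; it assumes both masters are already running keepalived with the configs above):

# on master01: stop keepalived and confirm the VIP is released here
systemctl stop keepalived
ip a | grep 197.200

# on master02: the VIP should show up within a few seconds
ip a | grep 197.200

# on master01: start keepalived again; priority 100 > 50, so the VIP preempts back
systemctl start keepalived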

2.k8s

The hosts file on all four machines

/etc/hosts
192.168.197.130 master01
192.168.197.131 master02
192.168.197.132 node01
192.168.197.133 node02
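If you only edited /etc/hosts on master01, a small loop can push it to the other machines (assumes passwordless root SSH; IPs are used because the names may not resolve yet):

for ip in 192.168.197.131 192.168.197.132 192.168.197.133; do
    scp /etc/hosts $ip:/etc/hosts
done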

Run this script on both master nodes

[root@master01 ~]# cat install_docker_k8s.sh
#!/bin/bash
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install kubeadm-1.23.3 kubectl-1.23.3 kubelet-1.23.3 docker-ce-20.10.12 -y

# back up any existing daemon.json, then write ours (no trailing comma: docker rejects invalid JSON)
cp /etc/docker/daemon.json{,.bak$(date +%F)}
cat <<EOF >/etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
# docker version

# "Trick" image install: pull from the Aliyun mirror, re-tag as k8s.gcr.io, drop the mirror tag.
# * to see which image versions kubeadm wants, run: kubeadm config images list
images=(`kubeadm config images list|awk -F '/' '{print $2}'|head -6`)
for img in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$img
done

# coredns lives under k8s.gcr.io/coredns/coredns, so it is handled separately
corednstag=`kubeadm config images list|awk -F 'io' '{print $2}'|tail -1`
coredns=`kubeadm config images list|awk -F '/' '{print $3}'|tail -1`
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns k8s.gcr.io$corednstag
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns

cat <<EOF >/etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
systemctl enable kubelet

echo 'net.bridge.bridge-nf-call-ip6tables = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >>/etc/sysctl.conf
echo 'net.ipv4.ip_forward=1' >>/etc/sysctl.conf
sysctl -p
swapoff -a
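After the script finishes, a few sanity checks are worth running (these are not part of the original script, just what you would expect to see at this point):

docker info 2>/dev/null | grep -i cgroup     # should report: Cgroup Driver: systemd
docker images | grep k8s.gcr.io              # the re-tagged control-plane images
kubeadm version -o short                     # should print v1.23.3
kubelet --version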

Run on both worker nodes (the only difference is that kubectl is not installed; everything else is the same. You can also reuse the master's images instead of pulling them: skip the pull step and use docker save, scp, docker load, as in the sketch after the script below):

[root@node01 ~]# cat install_k8s_node.sh
#!/bin/bash
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install kubeadm-1.23.3 kubelet-1.23.3 -y
yum install docker-ce-20.10.12 -y

# back up any existing daemon.json, then write ours (no trailing comma)
cp /etc/docker/daemon.json{,.bak_$(date +%F)}
cat <<EOF >/etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
# docker version

# same "trick" image install as on the masters
# * to see which image versions kubeadm wants, run: kubeadm config images list
images=(`kubeadm config images list|awk -F '/' '{print $2}'|head -6`)
for img in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$img
done
corednstag=`kubeadm config images list|awk -F 'io' '{print $2}'|tail -1`
coredns=`kubeadm config images list|awk -F '/' '{print $3}'|tail -1`
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns k8s.gcr.io$corednstag
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns

cat <<EOF >/etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
systemctl enable kubelet

echo 'net.bridge.bridge-nf-call-ip6tables = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >>/etc/sysctl.conf
echo 'net.ipv4.ip_forward=1' >>/etc/sysctl.conf
sysctl -p
swapoff -a
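The docker save / scp / docker load shortcut mentioned above might look roughly like this (a sketch, assuming passwordless root SSH from master01 to both nodes):

# on master01: export all re-tagged k8s.gcr.io images into one tarball
docker save $(docker images --format '{{.Repository}}:{{.Tag}}' | grep k8s.gcr.io) -o k8s-images.tar

# copy and import on each node (loading more images than a node strictly needs is harmless)
for ip in 192.168.197.132 192.168.197.133; do
    scp k8s-images.tar $ip:/root/
    ssh $ip "docker load -i /root/k8s-images.tar"
done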

3. Initialize the cluster

Save the following as kubeadm-init.yaml (it is referenced by the init command below):

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.197.200        # the VIP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:                                 # add the following two lines
  certSANs:
  - "192.168.197.200"                      # the VIP address; not needed when there is only one master
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.197.200:6443"   # the VIP address and port
kind: ClusterConfiguration
kubernetesVersion: v1.23.3                 # kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12              # keep the default
  podSubnet: 10.244.0.0/16                 # add the pod subnet
scheduler: {}

Change the VIP above to your own.

Run on the first master (master01):

kubeadm init --config kubeadm-init.yaml --upload-certs
Save the important output:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb15c00d4 \
    --control-plane --certificate-key ef0f358d41e5ead3cc74e183aa2201b1773b605926932170141bd60605c44735
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb16c00d4

There are two join commands above: if the machine joins as a master (control plane), use the first one; otherwise use the second.

Run on the other master (master02):

kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb15c00d4 \
    --control-plane --certificate-key ef0f358d41e5ead3cc74e183aa2201b1773b605926932170141bd60605c44735
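On both masters, kubectl needs the admin kubeconfig before it will talk to the cluster; this is the standard step printed by kubeadm init and by the control-plane join (not shown in the trimmed output above):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config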

Run on both worker nodes:

kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb16c00d4
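The token above expires after 24 h and the certificate key after 2 h. If you join more machines later, you can regenerate them on a master instead of re-initializing; for example:

# print a fresh join command for worker nodes
kubeadm token create --print-join-command

# re-upload the control-plane certs and print a new certificate key (for joining another master)
kubeadm init phase upload-certs --upload-certs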

4. Deploy the network plugin

It only needs to be applied from master01.

The network plugin is flannel; I grabbed the manifest from GitHub, you can copy it as-is.

[root@master01 ~]# cat flannel.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
kubectl apply -f flannel.yaml
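After applying the manifest you can watch the flannel DaemonSet come up (the label app=flannel comes from the YAML above); the nodes switch from NotReady to Ready once the CNI is in place:

kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes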

If pulling the images is too slow, no problem: I have packaged them and shared them with you.

Link: https://pan.baidu.com/s/1Hb-DU5gAKHfkVDbTOde0nQ  extraction code: 1212

Usage: after downloading, upload the tarballs to the server with rz, then:

docker load <cni.tar

docker load <cni-flannel.tar

PS: every node needs the network images, so run the docker load on all of them!

Then check on master01:

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   163m   v1.23.3
master02   Ready    control-plane,master   161m   v1.23.3
node01     Ready    <none>                 156m   v1.23.3
node02     Ready    <none>                 159m   v1.23.3
[root@master01 ~]# kubectl get pod -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-65c54cc984-6cwrx           1/1     Running   0          171m
kube-system   coredns-65c54cc984-bb5fn           1/1     Running   0          171m
kube-system   etcd-master01                      1/1     Running   0          171m
kube-system   etcd-master02                      1/1     Running   0          170m
kube-system   kube-apiserver-master01            1/1     Running   0          171m
kube-system   kube-apiserver-master02            1/1     Running   0          170m
kube-system   kube-controller-manager-master01   1/1     Running   0          171m
kube-system   kube-controller-manager-master02   1/1     Running   0          170m
kube-system   kube-proxy-7xkc2                   1/1     Running   0          171m
kube-system   kube-proxy-bk82v                   1/1     Running   0          165m
kube-system   kube-proxy-kvf2p                   1/1     Running   0          167m
kube-system   kube-proxy-wwlk5                   1/1     Running   0          170m
kube-system   kube-scheduler-master01            1/1     Running   0          171m
kube-system   kube-scheduler-master02            1/1     Running   0          170m
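Finally, a rough check that the VIP failover is transparent to kubectl (a sketch: it only moves the VIP by stopping keepalived, it does not power master01 off, since a two-member etcd cannot lose a member and keep quorum):

# on master01: release the VIP without touching the control-plane components
systemctl stop keepalived

# on master02: the VIP arrives here and the API keeps answering through 192.168.197.200:6443
ip a | grep 197.200
kubectl get nodes

# on master01: restore keepalived afterwards
systemctl start keepalived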

 

For visualization you can later use Rancher, KubeSphere, or kubernetes-dashboard; pick whichever you prefer.

