
Deploying a Kubernetes Cluster from Binaries

Published: 2023-11-22 16:12:05
Tags: https, binary, 192.168, etc, cluster, etcd, --, k8s

1. Preliminary Planning

Host plan

IP address      Hostname      Role    Software
192.168.16.129  k8s-master01  master  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, haproxy, keepalived
192.168.16.130  k8s-master02  master  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, haproxy, keepalived
192.168.16.131  k8s-master03  master  kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet
192.168.16.132  k8s-node1     node    kubelet, kube-proxy

Software versions

Software    Version    Notes
CentOS 7    kernel 6.6 upgraded via elrepo kernel-ml
kubernetes  v1.21.10
etcd        v3.5.2
calico      v3.19.4
coredns     v1.8.4
docker      20.10.13   installed via yum
haproxy     1.5.18     installed via yum
keepalived  1.3.5      installed via yum

Network address plan

Network          CIDR             Notes
Node network     192.168.16.0/24
Service network  10.96.0.0/16
Pod network      10.244.0.0/16
2. Common Configuration for All Hosts

Set hostnames and configure /etc/hosts resolution

# cat /etc/hosts
192.168.16.129 k8s-master01
192.168.16.130 k8s-master02
192.168.16.131 k8s-master03
192.168.16.132 k8s-node1
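The hostname and resolution entries can be applied with a short script on each node, using the IPs and hostnames from the planning table (a sketch; pass the node's own hostname as the first argument):

```shell
#!/bin/bash
# Set this node's hostname, e.g. ./set-hosts.sh k8s-master01
hostnamectl set-hostname "$1"

# Append the cluster entries only if they are not already present
grep -q 'k8s-master01' /etc/hosts || cat >> /etc/hosts <<EOF
192.168.16.129 k8s-master01
192.168.16.130 k8s-master02
192.168.16.131 k8s-master03
192.168.16.132 k8s-node1
EOF
```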

Disable the firewall, SELinux, and swap
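These steps are not spelled out above; on CentOS 7 a typical sequence looks like this (a sketch; confirm the sed edits match your actual config and fstab entries before rebooting):

```shell
# Disable the firewall now and on boot
systemctl disable --now firewalld

# Switch SELinux to permissive now and disable it permanently
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Turn off swap now and comment it out of fstab so it stays off
swapoff -a
sed -i '/\sswap\s/s/^/#/' /etc/fstab
```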

Configure time synchronization
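One common choice on CentOS 7 is chrony (a sketch; substitute your own NTP servers in /etc/chrony.conf if the defaults are unreachable):

```shell
# Install and enable chrony for time synchronization
yum install chrony -y
systemctl enable --now chronyd

# Confirm at least one time source is reachable
chronyc sources
```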

Configure resource limits

# vim /etc/security/limits.conf
*    soft     nofile     655360
*    hard     nofile     655360
*    soft     nproc      655350
*    hard     nproc      655350
*    soft     memlock    unlimited
*    hard     memlock    unlimited

Install and configure the IPVS management tools and kernel modules

# yum install ipvsadm ipset sysstat conntrack libseccomp -y
# modprobe -- ip_vs
# modprobe -- ip_vs_rr
# modprobe -- ip_vs_wrr
# modprobe -- ip_vs_sh
# modprobe -- nf_conntrack

# cat >/etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
# systemctl enable --now systemd-modules-load
# systemctl restart systemd-modules-load

Upgrade the kernel

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# yum install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64
# grub2-set-default 0
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
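After the reboot, it is worth confirming the new kernel actually booted (a quick check; `grubby` ships with CentOS 7):

```shell
# The running kernel should now be the elrepo kernel-ml version
uname -r

# The default boot entry should point at the same kernel
grubby --default-kernel
```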

Install common utilities

# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git lrzsz -y
3. High-Availability Configuration

Install haproxy and keepalived on the master01 and master02 hosts

haproxy configuration

# yum install haproxy keepalived -y
# cat /etc/haproxy/haproxy.cfg 
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
        maxconn 2000
        ulimit-n 16384
        log 127.0.0.1 local0 err

defaults
        log global
        mode http
        option httplog
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        timeout http-request 15s
        timeout http-keep-alive 15s



frontend monitor-in
        bind 0.0.0.0:33305
        mode http
        option httplog
        monitor-uri /monitor

frontend k8s-master
        bind 0.0.0.0:16443
        bind 127.0.0.1:16443
        mode tcp
        option tcplog
        tcp-request inspect-delay 5s
        default_backend k8s-master

backend k8s-master
        mode tcp
        option tcplog
        option tcp-check
        balance roundrobin
        default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
        server master01 192.168.16.129:6443 check
        server master02 192.168.16.130:6443 check
        server master03 192.168.16.131:6443 check

# systemctl enable --now haproxy

Verify in a browser: http://192.168.16.130:33305/monitor
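The monitor frontend can also be checked from the command line (a quick sketch; run against either haproxy host):

```shell
# Expect an HTTP 200 response from the haproxy monitor-uri
curl -i http://192.168.16.130:33305/monitor
```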

keepalived configuration

# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
   script_user root
   enable_script_security
}

vrrp_script chk_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 5
   weight -5
   fall 2
   rise 1
}

vrrp_instance VI_1 {
    state MASTER                        ## set to BACKUP on the standby
    interface ens33
    mcast_src_ip 192.168.16.129         ## on the standby, set its own IP address, 192.168.16.130
    virtual_router_id 51
    priority 101                        ## set the priority to 99 on the standby
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass abc123
    }
    virtual_ipaddress {
        192.168.16.250
    }
    track_script {
        chk_apiserver
    }
}
# cat /etc/keepalived/check_apiserver.sh 
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
# systemctl enable --now keepalived
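Once keepalived is up on both hosts, the VIP should be bound on the MASTER node (a quick check; the interface name assumes the `ens33` used in keepalived.conf):

```shell
# On the current MASTER, the VIP should appear on ens33
ip addr show ens33 | grep 192.168.16.250

# Stopping haproxy on the MASTER should cause the check script to stop
# keepalived there, and the VIP should fail over to the BACKUP
```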

Configure passwordless SSH between the hosts
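A minimal way to set this up from master01 (a sketch; `ssh-copy-id` prompts once for each host's password):

```shell
# Generate a passphrase-less key pair if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every node in the cluster
for host in k8s-master01 k8s-master02 k8s-master03 k8s-node1; do
    ssh-copy-id root@"$host"
done
```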

4. Creating Certificates with cfssl

Download the cfssl toolchain used to issue the certificates

# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 --no-check-certificate
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 --no-check-certificate
# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 --no-check-certificate
# chmod +x cfssl*
# mv cfssl_linux-amd64 /usr/bin/cfssl
# mv cfssljson_linux-amd64 /usr/bin/cfssljson
# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
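Before generating certificates, it is worth confirming the binaries are on PATH and executable:

```shell
# Should print the cfssl version and runtime information
cfssl version
```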

Create the CA certificate

# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ],
  "ca": {
      "expiry": "87600h"
  }
}

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2023/11/13 11:00:31 [INFO] generating a new CA key and certificate from CSR
2023/11/13 11:00:31 [INFO] generate received request
2023/11/13 11:00:31 [INFO] received CSR
2023/11/13 11:00:31 [INFO] generating key: rsa-2048
2023/11/13 11:00:31 [INFO] encoded CSR
2023/11/13 11:00:31 [INFO] signed certificate with serial number 574209306477940501530924598323722273337915651468

## Configure the CA signing policy
# cat ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
5. Installing the etcd Cluster

Generate the etcd certificates

# cat etcd-csr.json 
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.16.129",
    "192.168.16.130",
    "192.168.16.131"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Deploy the etcd cluster

# wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
# tar -zxvf etcd-v3.5.2-linux-amd64.tar.gz
# cp -p etcd-v3.5.2-linux-amd64/etcd* /usr/bin/
# mkdir /etc/etcd
# scp /usr/bin/etcd* k8s-master02:/usr/bin/
# scp /usr/bin/etcd* k8s-master03:/usr/bin/
# vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.16.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.16.129:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.16.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.16.129:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.16.129:2380,etcd2=https://192.168.16.130:2380,etcd3=https://192.168.16.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
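The same file on k8s-master02 differs only in the node name and the local listen/advertise addresses (a sketch; master03 follows the same pattern with etcd3 and 192.168.16.131):

```shell
# /etc/etcd/etcd.conf on k8s-master02
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.16.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.16.130:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.16.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.16.130:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.16.129:2380,etcd2=https://192.168.16.130:2380,etcd3=https://192.168.16.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```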


# mkdir -p /etc/etcd/ssl
# mkdir  -p /var/lib/etcd/default.etcd
# cp ca*.pem /etc/etcd/ssl/
# cp etcd*.pem /etc/etcd/ssl/
# vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --client-cert-auth \
  --peer-client-cert-auth
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

## NOTE: after copying etcd.conf, update the IP addresses and node name for each host
# scp /etc/etcd/etcd.conf k8s-master02:/etc/etcd/
# scp /etc/etcd/etcd.conf k8s-master03:/etc/etcd/
# scp /etc/etcd/ssl/* k8s-master02:/etc/etcd/ssl/
# scp /etc/etcd/ssl/* k8s-master03:/etc/etcd/ssl/
# scp /etc/systemd/system/etcd.service k8s-master02:/etc/systemd/system/
# scp /etc/systemd/system/etcd.service k8s-master03:/etc/systemd/system/
## Start etcd on all three hosts
# systemctl daemon-reload
# systemctl enable --now etcd
# systemctl status etcd

# ETCDCTL_API=3 /usr/bin/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://192.168.16.129:2379,https://192.168.16.130:2379,https://192.168.16.131:2379" endpoint health --write-out=table
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.16.129:2379 |   true |  9.964038ms |       |
| https://192.168.16.131:2379 |   true | 10.207664ms |       |
| https://192.168.16.130:2379 |   true | 11.264541ms |       |
+-----------------------------+--------+-------------+-------+
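Beyond health, cluster membership and the current leader can be inspected the same way:

```shell
# List cluster members
ETCDCTL_API=3 /usr/bin/etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints="https://192.168.16.129:2379,https://192.168.16.130:2379,https://192.168.16.131:2379" \
  member list --write-out=table

# Show per-endpoint status, including which node holds leadership
ETCDCTL_API=3 /usr/bin/etcdctl \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints="https://192.168.16.129:2379,https://192.168.16.130:2379,https://192.168.16.131:2379" \
  endpoint status --write-out=table
```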

From: https://www.cnblogs.com/zbc230/p/17849562.html

    KubernetesGatewayAPI刚刚GA,旨在改进将集群服务暴露给外部的过程。这其中包括一套更标准、更强大的API资源,用于管理已暴露的服务。在这篇文章中,我将介绍GatewayAPI资源,并以Istio为例来展示这些资源是如何关联的。通过这个示例,你将了解GatewayAPI的各个组成部分如何配......