
Deploying k8s on CentOS 7 from Binaries


Preface: make sure every configuration is correct. This deployment touches many files, and most failures come from a typo in a config file or from running steps out of order; check the service logs to see exactly where it breaks, or paste your configs and commands into an AI assistant to verify the ordering.

1. Deploy the etcd Cluster

Three machines running CentOS 7.4; all machines must resolve each other's hostnames. Disable the firewall and SELinux on every node.

#sudo systemctl stop firewalld
#sudo systemctl disable firewalld
#sudo setenforce 0
Or edit /etc/selinux/config and change:

SELINUX=enforcing
to:
SELINUX=disabled

[root@master ~]# vim /etc/hosts

192.168.2.100 k8s-master

192.168.2.101 k8s-node1

192.168.2.102 k8s-node2

(My actual hostnames do not include the k8s prefix; I use it in this article for clarity.)

1.1 Download the cfssl tools:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
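
A quick sanity check that the tools are installed and on the PATH:

# cfssl version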

1.2 Generate the etcd certificates (create three files):

[root@k8s-master1 ~]# mkdir cert

[root@k8s-master1 ~]# cd cert/

[root@k8s-master1 cert]# vim ca-config.json    # the CA signing policy

[root@k8s-master1 cert]# cat ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

[root@k8s-master1 cert]# vim ca-csr.json    # the CA certificate signing request

[root@k8s-master1 cert]# cat ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
        }
    ]
}

[root@k8s-master1 cert]# vim server-csr.json    # the server certificate signing request

[root@k8s-master1 cert]# cat server-csr.json

{
    "CN": "etcd",
    "hosts": [
    "192.168.2.100",
    "192.168.2.101",
    "192.168.2.102"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

Generate the certificates:

[root@k8s-master1 cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

[root@k8s-master1 cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
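
Before distributing the certificates it can be worth inspecting what was issued; cfssl-certinfo (installed above) prints the parsed certificate, including the SAN host list:

# cfssl-certinfo -cert server.pem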

 

1.3 Deploy etcd
Binary package download: https://github.com/etcd-io/etcd/releases/tag/v3.3.10

The deployment steps below are identical on all three planned etcd nodes; the only difference is that the IPs in each node's etcd config file must be its own. (I downloaded the tarball from the release page and copied it over, but you can also fetch it directly with wget; I used version 3.3.10.)

# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz


Unpack the binary package (do this on all three machines):

[root@k8s-master]# mkdir /opt/etcd/{bin,cfg,ssl} -p
[root@k8s-master]# ls
etcd-v3.3.10-linux-amd64.tar.gz       kubernetes-node-linux-amd64.tar.gz
flannel-v0.10.0-linux-amd64.tar.gz    kubernetes-server-linux-amd64.tar.gz
kubernetes-client-linux-amd64.tar.gz
[root@k8s-master]# tar zxf etcd-v3.3.10-linux-amd64.tar.gz 
[root@k8s-master]# mv etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
[root@k8s-master]# ls /opt/etcd/bin/
etcd  etcdctl

Create the etcd configuration file:

# vim /opt/etcd/cfg/etcd

Be careful and precise here. The values below are for the master (etcd01); on the other two nodes, change ETCD_NAME and the IP addresses accordingly.

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.100:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.100:2380,etcd02=https://192.168.2.101:2380,etcd03=https://192.168.2.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"



Parameter explanations (a per-node config sketch follows this list):

* ETCD_NAME: node name; must differ on every node

* ETCD_DATA_DIR: data directory (etcd is a database stored on disk, not in memory; everything related to k8s ends up in etcd)

* ETCD_LISTEN_PEER_URLS: cluster peer listen address

* ETCD_LISTEN_CLIENT_URLS: client listen address

* ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster

* ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster

* ETCD_INITIAL_CLUSTER: cluster member addresses

* ETCD_INITIAL_CLUSTER_TOKEN: cluster token

* ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
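
Since only ETCD_NAME and the IPs differ between nodes, here is a minimal sketch that writes the file from two shell variables; NODE_NAME and NODE_IP are placeholders you set per machine before running it:

# set these two values for the machine you are on
NODE_NAME=etcd02          # etcd01 / etcd02 / etcd03
NODE_IP=192.168.2.101     # this machine's own IP
cat > /opt/etcd/cfg/etcd <<EOF
#[Member]
ETCD_NAME="${NODE_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${NODE_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${NODE_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${NODE_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${NODE_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.100:2380,etcd02=https://192.168.2.101:2380,etcd03=https://192.168.2.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF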

Manage etcd with systemd:

vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the certificates generated earlier to the paths referenced in the config file (scp the certificates generated on the master to the other two machines):

 # cd /root/cert/
 # cp ca*pem server*pem /opt/etcd/ssl

Copy them straight to the other two etcd machines:
[root@k8s-master cert]# scp ca*pem server*pem k8s-node1:/opt/etcd/ssl
[root@k8s-master cert]# scp ca*pem server*pem k8s-node2:/opt/etcd/ssl

Start etcd on all nodes and enable it at boot:
# systemctl daemon-reload
# systemctl start etcd
# systemctl enable etcd

Once all three are deployed, check the etcd cluster health from each machine (it helps to start the other members first, since a lone member will block waiting for its peers):

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.2.100:2379,https://192.168.2.101:2379,https://192.168.2.102:2379" \
cluster-health

 

If the output reports each member as healthy and ends with "cluster is healthy", the cluster is deployed successfully.

If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd will show the error. The usual causes are a mistyped command or steps run out of order; pasting your commands into an AI assistant to verify the sequence can also help.

1.4 Install Docker on the Node machines

# yum remove docker \
 docker-client \
 docker-client-latest \
 docker-common \
 docker-latest \
 docker-latest-logrotate \
 docker-logrotate \
 docker-selinux \
 docker-engine-selinux \
 docker-engine
# If an old Docker daemon is still running, stop it first: systemctl stop docker
# yum install -y yum-utils device-mapper-persistent-data lvm2 git
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum install docker-ce -y

Start Docker and enable it at boot:

# systemctl start docker
# systemctl enable docker

# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io   # configure a registry mirror
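
If that mirror script is unreachable, the same effect can be had by writing the mirror into Docker's daemon.json yourself; the mirror URL below is only an example, substitute one you can reach:

# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
# systemctl daemon-reload && systemctl restart docker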

2. Deploy the Flannel Network Plugin

1. Flannel stores its own subnet information in etcd, so make sure it can reach etcd, and write the predefined subnet first. 2. Deploy Flannel on the node machines; if no applications run on the master, do not deploy Flannel there. Its job is to let all the containers communicate with each other.

[root@k8s-master ~]# scp -r cert/ k8s-node1:/root/
# copy the generated certificates to the remaining machines
[root@k8s-master ~]# scp -r cert/ k8s-node2:/root/
[root@k8s-master ~]# cd cert/
# the next command is run from the directory that holds the etcd certificates
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.2.100:2379,https://192.168.2.101:2379,https://192.168.2.102:2379" \
set /coreos.com/network/config '{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
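
To confirm the subnet key was written, read it back with the same TLS flags (etcdctl in etcd 3.3 defaults to the v2 API used here):

/opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.2.100:2379,https://192.168.2.101:2379,https://192.168.2.102:2379" \
get /coreos.com/network/config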

# Note: the following deployment steps are performed on every planned node.

2.1 Download the binary package:

# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

2.2 Configure Flannel (on the node machines):

# mkdir -p /opt/kubernetes/cfg/
# vim /opt/kubernetes/cfg/flanneld
# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.2.100:2379,https://192.168.2.101:2379,https://192.168.2.102:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

2.3 Manage Flannel with systemd:

vim /usr/lib/systemd/system/flanneld.service
cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

2.4 Configure Docker to start with the Flannel subnet (you can simply overwrite the original unit file):

vim /usr/lib/systemd/system/docker.service
cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

2.5 Configure the node servers

Copy the certificate files from the master to node1 and node2: the nodes do not have them yet, but flanneld needs them.

# mkdir -pv /opt/etcd/ssl/                          # on node1 and node2
# scp /opt/etcd/ssl/*  k8s-node1:/opt/etcd/ssl/      # on the master
# scp /opt/etcd/ssl/*  k8s-node2:/opt/etcd/ssl/

2.6 Send node1's configuration files to node2

(If you do not mind the effort, it is still safer to configure each node by hand; otherwise a mistake in one file gets copied to the other node too. This also assumes node2 already has the binaries unpacked into place.)

scp /usr/lib/systemd/system/flanneld.service k8s-node2:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service k8s-node2:/usr/lib/systemd/system/

2.7 Restart Flannel and Docker

[root@k8s-node1]# systemctl daemon-reload
[root@k8s-node1]# systemctl start flanneld
[root@k8s-node1]# systemctl enable flanneld
[root@k8s-node1]# systemctl restart docker

ip a
# check that the flannel.1 interface is up
ps -ef | grep docker
# confirm dockerd was started with the bip option from the Flannel subnet

# Make sure docker0 and flannel.1 are in the same subnet.
# To test connectivity between nodes, access another node's docker0 IP from the current node:

On my node1, docker0 is 172.17.10.1; pinging it from node2 succeeds, which shows cross-node access works.
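
For example, from node2 (172.17.10.1 is my node1 docker0 address from above; yours will differ):

[root@k8s-node2 ~]# ping -c 3 172.17.10.1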

If the ping succeeds, Flannel is deployed successfully. If not, check the logs: journalctl -u flanneld

3. Deploy Components on the Master Node

Before deploying the Kubernetes components, make absolutely sure etcd, Flannel, and Docker are all working; otherwise fix the problems before continuing.

3.1 Generate certificates:

3.1.1 Create the CA certificate

Performed on the master node; these are the certificates created for the api-server. Other services must authenticate with a certificate when they access the api-server.

[root@k8s-master1 ~]# mkdir -p /opt/crt/
[root@k8s-master1 ~]# cd /opt/crt/
[root@k8s-master ~]# cat /opt/crt/ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

[root@k8s-master ~]# cat /opt/crt/ca-csr.json 
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
# cfssl writes the generated CA certificate and private key to ca.pem and ca-key.pem respectively
[root@k8s-master crt]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

3.1.2 Generate the api-server certificate

[root@k8s-master ~]# cat /opt/crt/server-csr.json 
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.2.100",
      "192.168.2.101",
      "192.168.2.102",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
# double-check the IP addresses in the hosts list
# generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

3.1.3 Generate the kube-proxy certificate:

[root@k8s-master ~]# cat /opt/crt/kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

In the end you should have six certificate files: ca.pem, ca-key.pem, server.pem, server-key.pem, kube-proxy.pem, and kube-proxy-key.pem.
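
A quick way to confirm they all exist, assuming you are still in /opt/crt:

[root@k8s-master crt]# ls *.pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem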

3.2 Deploy the api-server component

Performed on the master node. Download the binary package linked from https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md; get kubernetes-server-linux-amd64.tar.gz (about 400 MB), which contains all the components you need.

# wget https://dl.k8s.io/v1.11.10/kubernetes-server-linux-amd64.tar.gz
# mkdir /opt/kubernetes/{bin,cfg,ssl} -pv
# cp /opt/crt/*.pem /opt/kubernetes/ssl/    # put the certificates into the kubernetes ssl directory

# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

3.2.1 Create the token file

[root@k8s-master1 crt]# cd /opt/kubernetes/cfg/
# cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Column 1: a random string (generate your own; see the one-liner below)

Column 2: user name

Column 3: UID

Column 4: user group
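
A common way to generate the random string for column 1 (any hex generator works; this is just a convenient one-liner):

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '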

3.2.2 Create the apiserver configuration file:

[root@k8s-master1 cfg]# pwd
/opt/kubernetes/cfg
[root@k8s-master1 cfg]# vim kube-apiserver
[root@k8s-master1 cfg]# cat kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.2.100:2379,https://192.168.2.101:2379,https://192.168.2.102:2379 \
--bind-address=192.168.2.100 \
--secure-port=6443 \
--advertise-address=192.168.2.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
 

Point these options at the certificates generated earlier and make sure the api-server can connect to etcd.

Parameter notes:

--logtostderr: enable logging to stderr
--v: log verbosity
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization mode; enables RBAC and node self-management
--enable-bootstrap-token-auth: enables TLS bootstrapping, covered below
--token-auth-file: token file
--service-node-port-range: default NodePort range for Services

3.2.3 Manage the apiserver with systemd:

[root@k8s-master1 cfg]# cd /usr/lib/systemd/system
# vim kube-apiserver.service
# cat /usr/lib/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start it:

# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl start kube-apiserver
# systemctl status kube-apiserver
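
Once it is up, a quick liveness probe; this assumes the insecure localhost port 8080 that kube-apiserver 1.11 still serves by default:

# curl http://127.0.0.1:8080/healthz
ok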

3.3 Deploy the kube-scheduler component

3.3.1 Create the scheduler configuration file:

[root@k8s-master1 cfg]# vim  /opt/kubernetes/cfg/kube-scheduler
# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--port=10251 \
--address=127.0.0.1"

Parameter notes: --master connects to the local apiserver; --leader-elect enables automatic leader election when multiple instances of the component run (HA).

3.3.2 Manage the scheduler with systemd:

[root@k8s-master1 cfg]# cd /usr/lib/systemd/system/
# vim kube-scheduler.service
# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start it:

# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl start kube-scheduler
# systemctl status kube-scheduler

3.4 Deploy the kube-controller-manager component

On the master node, create the controller-manager configuration file:

[root@k8s-master1 ~]# cd /opt/kubernetes/cfg/
[root@k8s-master1 cfg]# vim kube-controller-manager
# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

3.4.1 Manage controller-manager with systemd:

[root@k8s-master1 cfg]# cd /usr/lib/systemd/system/
 [root@k8s-master1 system]# vim kube-controller-manager.service
 # cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start it:

# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager
# systemctl status kube-controller-manager.service

All components are now started. Check the cluster component status with kubectl:
/opt/kubernetes/bin/kubectl get cs

If the kubectl binary is not found without its full path, add /opt/kubernetes/bin to PATH:

PATH=/opt/kubernetes/bin:$PATH:$HOME/bin
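
That export only lasts for the current shell; to make it permanent, a profile script is one reasonable option (the filename here is arbitrary):

# echo 'export PATH=/opt/kubernetes/bin:$PATH' > /etc/profile.d/kubernetes.sh
# source /etc/profile.d/kubernetes.sh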

3.5 Deploy components on the Node machines

Once the master apiserver enables TLS authentication, a node's kubelet must present a valid CA-signed certificate to communicate with it. Signing certificates by hand becomes very tedious with many nodes, which is what the TLS Bootstrapping mechanism is for: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically.

The rough authentication workflow: the kubelet authenticates with the bootstrap token, the request shows up on the master as a certificate signing request (CSR), and once approved the apiserver signs and returns the kubelet's client certificate.

3.5.1 Bind the kubelet-bootstrap user to the system cluster role (on the master)

[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

3.5.2 Create the kubeconfig files:

Run the following in the directory where the kubernetes certificates were generated to produce the kubeconfig files:
[root@k8s-master1 ~]# cd /opt/crt/
# point KUBE_APISERVER at the apiserver (or its internal load balancer)
[root@k8s-master1 crt]# KUBE_APISERVER="https://192.168.2.100:6443"
# use your master's IP here; in an HA cluster use the load balancer's IP
[root@k8s-master1 crt]# BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc

# set cluster parameters
[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

# set client authentication parameters
[root@k8s-master crt]# /opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

# set context parameters
[root@k8s-master crt]# /opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# set the default context
[root@k8s-master crt]# /opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# create the kube-proxy kubeconfig file
[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

 [root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 crt]# ls *.kubeconfig
bootstrap.kubeconfig  kube-proxy.kubeconfig
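
A sanity check on the result; config view prints the clusters, users, and contexts embedded in the file (certificate data is redacted):

[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config view --kubeconfig=bootstrap.kubeconfig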

# Prerequisite: both files must have been generated and your configuration must be correct; if a step failed, re-check the order of the commands above.

Copy these two files to the /opt/kubernetes/cfg directory on the Node machines.

[root@k8s-master1 crt]# scp *.kubeconfig k8s-node1:/opt/kubernetes/cfg/
[root@k8s-master1 crt]# scp *.kubeconfig k8s-node2:/opt/kubernetes/cfg/

--------------------- The following operations are performed on the node machines ---------------------

3.6 Deploy the kubelet component (on the nodes)

3.6.1 Copy the binaries to the nodes

# Copy the package from the master: the kubelet and kube-proxy binaries from the downloaded tarball go into /opt/kubernetes/bin. The 400 MB server package contains every file needed, which is why we reuse it here.

[root@k8s-master1 ~]# scp kubernetes-server-linux-amd64.tar.gz k8s-node1:/root/
[root@k8s-master1 ~]# scp kubernetes-server-linux-amd64.tar.gz k8s-node2:/root/
[root@k8s-node1 ~]# tar xzf kubernetes-server-linux-amd64.tar.gz
[root@k8s-node1 ~]# cd kubernetes/server/bin/
[root@k8s-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/

3.6.2 Create the kubelet configuration file on both nodes

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.2.101 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@k8s-node2 ~]# vim /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.2.102 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"


Alternatively, copy it over and edit the IP: scp /opt/kubernetes/cfg/kubelet k8s-node2:/opt/kubernetes/cfg/ (remember to change --hostname-override to node2's address).

# The pause image must be pulled in advance:

[root@k8s-node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
[root@k8s-node2 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

Parameter notes:

--hostname-override: the hostname the node shows in the cluster
--kubeconfig: path of the kubeconfig file (generated automatically on first join)
--bootstrap-kubeconfig: the bootstrap.kubeconfig file generated earlier
--cert-dir: where issued certificates are stored
--pod-infra-container-image: the image that manages the Pod network

# The /opt/kubernetes/cfg/kubelet.config file referenced above looks like this:

(address is the IP of the node itself; the first block below is node1's)

#node1
cat /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.2.101
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

scp /opt/kubernetes/cfg/kubelet.config k8s-node2:/opt/kubernetes/cfg/    # then change address to node2's IP
Or write node2's file directly:
#node2
cat /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.2.102
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

3.6.3 Manage the kubelet with systemd:

cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

scp /usr/lib/systemd/system/kubelet.service k8s-node2:/usr/lib/systemd/system/

Start it (run on node1 and node2):

# systemctl daemon-reload
# systemctl enable kubelet
# systemctl start kubelet

Approve the nodes joining the cluster from the master: after starting, a kubelet is not yet a cluster member; its request must be approved manually.
List the pending certificate signing requests on the master:
[root@k8s-master ~]# /opt/kubernetes/bin/kubectl get csr

Then approve each request by the NAME shown (XXXXID stands for that name):
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl certificate approve XXXXID
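
If several nodes are waiting, here is a sketch for approving every pending request at once (plain kubectl plumbing, nothing specific to this setup):

# /opt/kubernetes/bin/kubectl get csr -o name | xargs /opt/kubernetes/bin/kubectl certificate approve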

View the cluster node list:
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl get nodes

3.7 Deploy the kube-proxy component (again on all node machines)

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.2.101 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

(On node2, set --hostname-override=192.168.2.102.)

3.7.1 Manage kube-proxy with systemd:

[root@k8s-node1 ~]# cd /usr/lib/systemd/system
[root@k8s-node1 system]# vim kube-proxy.service
[root@k8s-node1 system]# cat kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start it:

# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl start kube-proxy

3.7.2 Check the cluster status from the master

[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl get node

NAME            STATUS    ROLES     AGE       VERSION
192.168.2.101   Ready     <none>    1d        v1.11.10
192.168.2.102   Ready     <none>    1d        v1.11.10

# check component status
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"} 

Run a test example (install Docker on the master node first):

# /opt/kubernetes/bin/kubectl run nginx --image=daocloud.io/nginx --replicas=3
# /opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# /opt/kubernetes/bin/kubectl delete deployment --all    # clean up when you are done

Check from the master; view the Pods and the Service:
# /opt/kubernetes/bin/kubectl get pods    # give it a moment

View detailed Pod information:
/opt/kubernetes/bin/kubectl describe pod nginx-6648ff9bb4-n772j
# /opt/kubernetes/bin/kubectl get svc

Access a node IP plus the NodePort reported by kubectl get svc; open a browser at: http://192.168.2.101:39789

Deployment complete.

4. Deploy the Dashboard (Web UI)

* dashboard-deployment.yaml    # deploys the Pod that serves the web UI

* dashboard-rbac.yaml           # authorizes access to the apiserver

* dashboard-service.yaml        # publishes the Service for external access

4.1 Create a working directory and dashboard-deployment.yaml

[root@k8s-master ~]# mkdir webui
[root@k8s-master ~]# cd webui/
[root@k8s-master webui]# touch dashboard-deployment.yaml
[root@k8s-master webui]# cat dashboard-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/kube_containers/kubernetes-dashboard-amd64:v1.8.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

4.2 Create dashboard-rbac.yaml

[root@k8s-master webui]# touch dashboard-rbac.yaml
[root@k8s-master webui]# cat dashboard-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

4.3 Create dashboard-service.yaml

[root@k8s-master webui]# touch dashboard-service.yaml
[root@k8s-master webui]# cat dashboard-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
[root@k8s-master webui]# /opt/kubernetes/bin/kubectl create -f dashboard-rbac.yaml
[root@k8s-master webui]# /opt/kubernetes/bin/kubectl create -f dashboard-deployment.yaml
[root@k8s-master webui]# /opt/kubernetes/bin/kubectl create -f dashboard-service.yaml
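
To wait for the deployment instead of polling by hand, rollout status blocks until the Pod is ready (plain kubectl, nothing specific to this manifest):

/opt/kubernetes/bin/kubectl rollout status deployment/kubernetes-dashboard -n kube-system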

Wait a few minutes, then check the resource status.

View everything in the kube-system namespace:

/opt/kubernetes/bin/kubectl get all -n kube-system

Find the access port:

/opt/kubernetes/bin/kubectl get svc -n kube-system

Visit a node IP on the reported NodePort, e.g. 192.168.2.101:39741, to open the dashboard and manage containers.

Setup complete!
