
k8s Single-Node and Multi-Node Deployment


k8s Single-Node Deployment

References

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131
https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational
https://github.com/etcd-io/etcd
https://shengbao.org/348.html
https://github.com/coreos/flannel
http://www.cnblogs.com/blogscc/p/10105134.html
https://blog.csdn.net/xiegh2014/article/details/84830880
https://blog.csdn.net/tiger435/article/details/85002337
https://www.cnblogs.com/wjoyxt/p/9968491.html
https://blog.csdn.net/zhaihaifei/article/details/79098564
http://blog.51cto.com/jerrymin/1898243
http://www.cnblogs.com/xuxinkun/p/5696031.html

1. Environment Planning

Software      Version
linux         centos7.4
kubernetes    1.13
docker        18
etcd          3.3

Role     IP            Components
master   172.16.1.43   kube-apiserver kube-controller-manager kube-scheduler etcd
node1    172.16.1.44   kubelet kube-proxy docker flannel etcd
node2    172.16.1.45   kubelet kube-proxy docker flannel etcd

2. Install Docker

Install Docker on both node machines.

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

Add a registry mirror located in China

vim /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
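Docker only reads daemon.json at startup, so restart the service after editing the file; docker info should then list the mirror under Registry Mirrors:

systemctl restart docker
docker info | grep -A 1 "Registry Mirrors"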

3. Self-Signed TLS Certificates

Here etcd and Kubernetes share the same CA and server certificates.

They can also be created separately so that each component uses its own certificates.

Component        Certificates used
etcd             ca.pem server.pem server-key.pem
kube-apiserver   ca.pem server.pem server-key.pem
kubelet          ca.pem ca-key.pem
kube-proxy       ca.pem kube-proxy.pem kube-proxy-key.pem
kubectl          ca.pem admin.pem admin-key.pem
flannel          ca.pem server.pem server-key.pem

1) Install the certificate generation tool cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2) Create the directory /opt/tools/cfssl; all certificates are generated there

mkdir -p /opt/tools/cfssl
cd /opt/tools/cfssl

3.1 Create the etcd certificates

1) CA configuration

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

2) CA certificate signing request file

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}
EOF

3) Generate the CA certificate and private key (initialize the CA)

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Check the generated CA files

[root@k8s-master cfssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
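To inspect the generated CA certificate (subject, validity period, and so on), the cfssl-certinfo tool installed above can decode it:

cfssl-certinfo -cert ca.pem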

3.2 Create the server certificate

1) Server certificate signing request file

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
    "127.0.0.1",
    "172.16.1.43",
    "172.16.1.44",
    "172.16.1.45",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}
EOF

2) Generate the server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Check the generated server certificate files

[root@k8s-master cfssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem

3.3 Create the kube-proxy certificate

1) kube-proxy certificate signing request file

cat > kube-proxy-csr.json <<EOF
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}
EOF

2) Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

3.4 Create the admin client certificate

1) admin certificate signing request file

cat > admin-csr.json <<EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

2) Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
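The etcd and Kubernetes configurations below reference these certificates under /opt/kubernetes/ssl, so copy them into place once generated (a sketch for the master; distribute to the other machines with scp as needed):

mkdir -p /opt/kubernetes/ssl
cp ca*.pem server*.pem kube-proxy*.pem admin*.pem /opt/kubernetes/ssl/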

4. Install etcd

Install etcd on all three machines to achieve high availability.

1) Download etcd

wget https://github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
tar xf etcd-v3.3.12-linux-amd64.tar.gz
cp etcd-v3.3.12-linux-amd64/{etcd,etcdctl} /opt/kubernetes/bin/

2) Add the Kubernetes bin directory to the PATH for later convenience (see the sketch below)
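One way to do this is a profile.d script; section 11 later copies /etc/profile.d/kubernetes.sh to master2, so a file along these lines is assumed:

cat > /etc/profile.d/kubernetes.sh <<EOF
export PATH=\$PATH:/opt/kubernetes/bin
EOF
source /etc/profile.d/kubernetes.sh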

3) Edit the configuration file

vim /opt/kubernetes/cfg/etcd

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.1.43:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.43:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.43:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.43:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.1.43:2380,etcd02=https://172.16.1.44:2380,etcd03=https://172.16.1.45:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
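The file above is for the first member (etcd01 on the master). On the other two machines only ETCD_NAME and the local IP change; for example, the member-specific lines on node1 (172.16.1.44) would be:

ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://172.16.1.44:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.44:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.44:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.44:2379"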

4) Create the systemd unit file

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
# set GOMAXPROCS to number of processors
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
#--peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\
#--client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" \
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5) Start etcd. When the first node starts, the command appears to hang because it cannot reach the other members yet; press Ctrl+C to return to the shell, the service has already started.

systemctl start etcd
systemctl enable etcd

6) Check the etcd cluster status

/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" cluster-health

member 2389474cc6fd9d08 is healthy: got healthy result from https://172.16.1.45:2379
member 5662fbe4b66bbe16 is healthy: got healthy result from https://172.16.1.44:2379
member 9f7ff9ac177a0ffb is healthy: got healthy result from https://172.16.1.43:2379
cluster is healthy

5. Deploy the flannel Network

Deploy flannel on both node machines.

Without a flannel network, Pods on different nodes cannot communicate; only Pods on the same node can. To keep the deployment steps clear, flannel is installed at this stage.

The flannel service must start before Docker. On startup, flanneld performs the following steps:

  • fetch the network configuration from etcd
  • allocate a subnet for this node and register it in etcd
  • write the subnet information to /run/flannel/subnet.env

5.1 Register the Pod network in etcd

1) The Pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr value passed to kube-controller-manager;

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

2) Verify that the configuration was registered

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" get /coreos.com/network/config

5.2 Install flannel

1) Download, extract, and install

https://github.com/coreos/flannel/releases
tar xf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld  mk-docker-opts.sh /opt/kubernetes/bin/

2) Edit the flannel configuration file

vim /opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

3) Create the flannel systemd unit file

vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
 
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Note:

  • The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/subnet.env; Docker reads the environment variables in this file at startup to configure the docker0 bridge.
  • flanneld communicates with other nodes over the interface of the system default route; on hosts with multiple interfaces (e.g. internal and public), use the -iface option to specify which one to use.
  • flanneld requires root privileges.

4) Modify the Docker systemd unit

To make Docker start with the flannel subnet, set EnvironmentFile=/run/flannel/subnet.env and ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS.

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
 
[Install]
WantedBy=multi-user.target

5) Start the services

Note: stop Docker (and kubelet, if it is running) before starting flannel, so that flannel can take over the docker0 bridge.

systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker

6) Verify the services

cat /run/flannel/subnet.env
ip a
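For reference, with the 172.17.0.0/16 network registered above, /run/flannel/subnet.env contains the variables that mk-docker-opts.sh turns into Docker options. The exact subnet and MTU differ per node; an illustrative example:

FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.16.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true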

6. Create the Node kubeconfig Files

Work in the directory where the certificates were generated.

cd /opt/tools/cfssl

6.1 Create the TLS Bootstrapping Token

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
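kube-apiserver reads this file through --token-auth-file=/opt/kubernetes/cfg/token.csv (see section 7.1), so copy it into the master's cfg directory:

cp token.csv /opt/kubernetes/cfg/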

6.2 Create the kubelet kubeconfig

KUBE_APISERVER="https://172.16.1.43:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

6.3 Create the kube-proxy kubeconfig

The kubectl binary is included in the kubernetes-node package.

kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

This produces two configuration files, bootstrap.kubeconfig and kube-proxy.kubeconfig, which are needed when deploying the node machines.

7. Deploy the Master

Install the following on the master node.

The Kubernetes master runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working instance while the others block, which is what makes a three-master high-availability setup possible.

Download the release tarball and place the binaries in the target directory.

https://github.com/kubernetes/kubernetes/releases

tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
mv kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/

7.1 Install kube-apiserver

1) Create the apiserver configuration file

vim /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=172.16.1.43 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.16.1.43 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

2) Create the apiserver systemd unit

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

3) Start the apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@k8s-master cfg]# ss -tnlp|grep kube-apiserver
LISTEN     0      16384  172.16.1.43:6443                     *:*                   users:(("kube-apiserver",pid=5487,fd=5))
LISTEN     0      16384  127.0.0.1:8080                     *:*                   users:(("kube-apiserver",pid=5487,fd=3))
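A quick sanity check against the local insecure port confirms the apiserver is answering requests:

curl http://127.0.0.1:8080/version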

7.2 Install kube-scheduler

1) Create the configuration file

vim /opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

--address: accepts http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https;

--kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;

--leader-elect=true: cluster mode; enables leader election, where the elected leader handles the work and the other instances block;

2) Create the scheduler systemd unit

vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

3) Start the scheduler

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl start kube-scheduler.service

7.3 Install kube-controller-manager

1) Create the configuration file

vim /opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

2) Create the systemd unit

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

3) Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

Installation complete. Check the status of the master components:

[root@k8s-master cfg]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   

8. Deploy the Nodes

Perform the following on all node machines.

A Kubernetes worker node runs the following components: docker, kubelet, kube-proxy, and flannel.

Download the node package

tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet /opt/kubernetes/bin/

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files from the master to all node machines.

Copy the relevant certificates from the master to all node machines, for example:
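A sketch of the copy from the master, assuming the files are still under /opt/tools/cfssl and taking node1 (172.16.1.44) as the example (repeat for 172.16.1.45):

cd /opt/tools/cfssl
scp bootstrap.kubeconfig kube-proxy.kubeconfig 172.16.1.44:/opt/kubernetes/cfg/
scp ca.pem server.pem server-key.pem kube-proxy.pem kube-proxy-key.pem 172.16.1.44:/opt/kubernetes/ssl/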

8.1 Install kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver to manage Pod containers and executes interactive commands such as exec, run, and logs. On startup, kubelet automatically registers the node with kube-apiserver, and its built-in cAdvisor collects and reports node resource usage. For security, only the https port is opened; requests are authenticated and authorized, and unauthorized access (e.g. from apiserver or heapster) is rejected.

1) Create the kubelet configuration file

vim /opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--address=172.16.1.44 \
--hostname-override=172.16.1.44 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--allow-privileged=true \
--cluster-dns=10.10.10.2 \
--cluster-domain=cluster.local \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
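This configuration is for node1 (172.16.1.44). On node2 change --address and --hostname-override to that node's own IP; the same applies to --hostname-override in the kube-proxy configuration in section 8.2:

--address=172.16.1.45 \
--hostname-override=172.16.1.45 \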

2) Create the kubelet systemd unit

vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

3) Bind the kubelet-bootstrap user to the system cluster role; otherwise startup fails with an error saying the kubelet-bootstrap user has no permission to create certificates.

Run this on the master node; kubectl connects to localhost:8080 by default.

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

4) Start kubelet

systemctl enable kubelet
systemctl start kubelet

5) The master accepts the kubelet CSR. CSRs can be approved manually or automatically; the automatic approach is recommended because, starting with v1.8, the certificates issued after CSR approval can be rotated automatically. The manual approval procedure is shown below. List the pending CSRs:

[root@k8s-master cfssl]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms   2m   kubelet-bootstrap   Pending

Approve the node's CSR

kubectl certificate approve node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms

Check the CSRs again

[root@k8s-master cfssl]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms   2m   kubelet-bootstrap   Approved,Issued

Check the cluster node status

[root@k8s-master cfssl]# kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
172.16.1.44   Ready    <none>   138m   v1.13.0

8.2 Install kube-proxy

kube-proxy runs on all node machines. It watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance traffic to Services.

1) Create the kube-proxy configuration file

vim /opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.16.1.44 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

2) Create the kube-proxy systemd unit

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

3) Start kube-proxy

systemctl enable kube-proxy
systemctl start kube-proxy

After the node joins the cluster, the kubelet kubeconfig is generated in the cfg directory, along with the kubelet certificates:

[root@k8s-node1 cfg]# ls /opt/kubernetes/cfg/kubelet.kubeconfig 
/opt/kubernetes/cfg/kubelet.kubeconfig

[root@k8s-node1 cfg]# ls /opt/kubernetes/ssl/kubelet*
/opt/kubernetes/ssl/kubelet-client-2019-03-30-11-49-33.pem  /opt/kubernetes/ssl/kubelet.crt
/opt/kubernetes/ssl/kubelet-client-current.pem              /opt/kubernetes/ssl/kubelet.key

Note: if kubelet or kube-proxy is misconfigured during this process (for example a wrong listen IP or hostname leading to a node not found error), delete the kubelet-client certificates, restart the kubelet service, and approve the new CSR again, for example:
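A sketch of the recovery on the affected node; afterwards approve the new CSR on the master as in step 5) above:

systemctl stop kubelet
rm -f /opt/kubernetes/ssl/kubelet-client* /opt/kubernetes/ssl/kubelet.crt /opt/kubernetes/ssl/kubelet.key
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
systemctl start kubelet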

9. The kubectl Management Tool

Configure kubectl on a client machine to manage the cluster.

1) Copy the kubectl binary to the client

scp kubectl 172.16.1.44:/usr/bin/

2) Copy the previously created admin and CA certificates to the client

scp admin*pem ca.pem 172.16.1.44:/root/kubernetes/

3) Set the apiserver address and root certificate for the cluster entry named kubernetes

kubectl config set-cluster kubernetes --server=https://172.16.1.43:6443 --certificate-authority=kubernetes/ca.pem

This generates the configuration file /root/.kube/config

cat .kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

4) Set the certificate authentication fields for the user entry cluster-admin

kubectl config set-credentials cluster-admin --certificate-authority=kubernetes/ca.pem --client-key=kubernetes/admin-key.pem --client-certificate=kubernetes/admin.pem

The admin user information is added to /root/.kube/config

cat .kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

5) Set the default cluster and user for the context entry named default

kubectl config set-context default --cluster=kubernetes --user=cluster-admin

The default context information is added to /root/.kube/config

cat .kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluster-admin
  name: default
current-context: ""
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

6) Set default as the current context

kubectl config use-context default

The complete client configuration file

.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluster-admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

7) Test: use kubectl from the client to connect to the cluster

[root@k8s-node1 ~]# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
172.16.1.44   Ready    <none>   6h30m   v1.13.0
172.16.1.45   Ready    <none>   6h10m   v1.13.0

The client certificates and configuration file can be packaged and reused on other clients.

10. Install CoreDNS

When installing kubelet we set the cluster DNS address to 10.10.10.2, but no DNS component has been installed yet, so Pods created now cannot resolve names. The DNS add-on needs to be installed.

Starting with Kubernetes 1.13, CoreDNS replaces kube-dns as the default DNS add-on.

1) Generate the coredns.yaml file

The coredns.yaml template file:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes __PILLAR__DNS__DOMAIN__ in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: __PILLAR__DNS__SERVER__
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

The transforms2sed.sed file:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/transforms2sed.sed

s/__PILLAR__DNS__SERVER__/$DNS_SERVER_IP/g
s/__PILLAR__DNS__DOMAIN__/$DNS_DOMAIN/g
s/__PILLAR__CLUSTER_CIDR__/$SERVICE_CLUSTER_IP_RANGE/g
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g

Use sed to replace the placeholders in the template: $DNS_SERVER_IP is the cluster DNS address 10.10.10.2, $DNS_DOMAIN is the cluster domain cluster.local, and $SERVICE_CLUSTER_IP_RANGE is the service IP range 10.10.10.0/24.
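The right-hand sides in transforms2sed.sed are shell-style placeholders, so one straightforward approach (a sketch, using the values above) is to substitute the concrete values into the sed script first and then run it against the template as shown further below:

sed -i 's#$DNS_SERVER_IP#10.10.10.2#g; s#$DNS_DOMAIN#cluster.local#g; s#$SERVICE_CLUSTER_IP_RANGE#10.10.10.0/24#g' transforms2sed.sed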

You also need to add the apiserver address to the Corefile via endpoint http://172.16.1.43:8080; by default CoreDNS connects to 10.10.10.1:443, which is not reachable in this setup.

apiVersion: v1
kind: ConfigMap
......
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            endpoint http://172.16.1.43:8080
            ......
        }

Generate the new configuration file

sed -f transforms2sed.sed coredns.yaml.base > coredns.yaml

The finished coredns.yaml configuration file

# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            endpoint http://172.16.1.43:8080   
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.10.10.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Official documentation: https://coredns.io/plugins/kubernetes/

resyncperiod | interval at which data is resynchronized from the Kubernetes API

endpoint | the Kubernetes API address; CoreDNS health-checks the listed endpoints and proxies requests to healthy ones

tls | certificates used when connecting to a remote Kubernetes API

pods | sets the POD-MODE, one of the following:

  • disabled: the default
  • insecure: returns an A record for the IP without checking whether a Pod with that IP actually exists; mainly kept for kube-dns compatibility
  • verified: the recommended mode; returns the A record only if a Pod with that IP exists, at the cost of more memory than insecure

upstream | address used for resolving external domains, either an IP address or a resolv.conf file

ttl | defaults to 5s, maximum 3600s

errors | errors are written to standard output

health | reports whether the current configuration is healthy; listens on http port 8080 by default (configurable)

kubernetes | answers DNS queries based on Service IPs

prometheus | exposes Prometheus-format metrics at http://localhost:9153/metrics

proxy | forwards queries that cannot be resolved locally to an upstream resolver; uses the host's /etc/resolv.conf by default

cache | caches DNS answers in memory, in seconds

reload | interval, in seconds, at which a changed configuration file is automatically reloaded

.:53 {
    kubernetes wh01 {
        resyncperiod 10s
        endpoint https://10.1.61.175:6443
        tls admin.pem admin-key.pem ca.pem
        pods verified
        endpoint_pod_names
        upstream /etc/resolv.conf
    }
    health
    log /var/log/coredns.log
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    reload 10s
}

2) Deploy CoreDNS

kubectl create -f coredns.yaml

3) Check the status

kubectl get all -o wide -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
pod/coredns-69b995478c-vs46g               1/1     Running   0          50m   172.17.16.2   172.16.1.44   <none>           <none>
pod/kubernetes-dashboard-9bb654ff4-4zmn8   1/1     Running   0          12h   172.17.66.5   172.16.1.45   <none>           <none>

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
service/kube-dns               ClusterIP   10.10.10.2     <none>        53/UDP,53/TCP,9153/TCP   50m     k8s-app=kube-dns
service/kubernetes-dashboard   NodePort    10.10.10.191   <none>        81:45236/TCP             4d17h   app=kubernetes-dashboard

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS             IMAGES                                                                                   SELECTOR
deployment.apps/coredns                1/1     1            1           50m     coredns                coredns/coredns:1.3.1                                                                    k8s-app=kube-dns
deployment.apps/kubernetes-dashboard   1/1     1            1           4d17h   kubernetes-dashboard   registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1   app=kubernetes-dashboard

NAME                                             DESIRED   CURRENT   READY   AGE     CONTAINERS             IMAGES                                                                                   SELECTOR
replicaset.apps/coredns-69b995478c               1         1         1       50m     coredns                coredns/coredns:1.3.1                                                                    k8s-app=kube-dns,pod-template-hash=69b995478c
replicaset.apps/kubernetes-dashboard-9bb654ff4   1         1         1       4d17h   kubernetes-dashboard   registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1   app=kubernetes-dashboard,pod-template-hash=9bb654ff4

4) Test

The nslookup in the busybox image is unreliable: the lookup appears to succeed but returns incorrect information.

Test instead with an image that ships working DNS tools (dig/nslookup), here azukiapp/dig

kubectl run dig --rm -it --image=docker.io/azukiapp/dig /bin/sh
----------

/ # cat /etc/resolv.conf 
nameserver 10.10.10.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # dig kubernetes.default.svc.cluster.local

; <<>> DiG 9.10.3-P3 <<>> kubernetes.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13605
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;kubernetes.default.svc.cluster.local. IN A

;; ANSWER SECTION:
kubernetes.default.svc.cluster.local. 5	IN A	10.10.10.1

;; Query time: 1 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:18 UTC 2019
;; MSG SIZE  rcvd: 117

/ # dig nginx-service.default.svc.cluster.local

; <<>> DiG 9.10.3-P3 <<>> nginx-service.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24013
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx-service.default.svc.cluster.local. IN A

;; ANSWER SECTION:
nginx-service.default.svc.cluster.local. 5 IN A	10.10.10.176

;; Query time: 0 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:29 UTC 2019
;; MSG SIZE  rcvd: 123

/ # dig www.baidu.com

; <<>> DiG 9.10.3-P3 <<>> www.baidu.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28619
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 5, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.baidu.com.			IN	A

;; ANSWER SECTION:
www.baidu.com.		30	IN	CNAME	www.a.shifen.com.
www.a.shifen.com.	30	IN	A	119.75.217.26
www.a.shifen.com.	30	IN	A	119.75.217.109

;; AUTHORITY SECTION:
a.shifen.com.		30	IN	NS	ns4.a.shifen.com.
a.shifen.com.		30	IN	NS	ns5.a.shifen.com.
a.shifen.com.		30	IN	NS	ns3.a.shifen.com.
a.shifen.com.		30	IN	NS	ns2.a.shifen.com.
a.shifen.com.		30	IN	NS	ns1.a.shifen.com.

;; ADDITIONAL SECTION:
ns4.a.shifen.com.	30	IN	A	14.215.177.229

;; Query time: 3 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:41 UTC 2019
;; MSG SIZE  rcvd: 391

11. Master High Availability

11.1 Add the master2 node

1) On master1, add the master2 IP 172.16.1.46 to the hosts list of the server certificate request and regenerate the server.pem certificate

cd /opt/tools/cfssl

vim server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
    "127.0.0.1",
    "10.10.10.1",
    "172.16.1.43",
    "172.16.1.44",
    "172.16.1.45",
    "172.16.1.46",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}

2) Regenerate the server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

3) Copy server.pem and server-key.pem to the ssl directory on the master and node machines and restart the affected services, for example:
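A sketch of the distribution from /opt/tools/cfssl on master1 (adjust the target list to cover every master and node). After copying, restart etcd, kube-apiserver, and flanneld on the machines that use these files:

cp server.pem server-key.pem /opt/kubernetes/ssl/
scp server.pem server-key.pem 172.16.1.44:/opt/kubernetes/ssl/
scp server.pem server-key.pem 172.16.1.45:/opt/kubernetes/ssl/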

4) Copy the /opt/kubernetes directory and the apiserver, scheduler, and controller-manager systemd units from master1 to master2

scp -r /opt/kubernetes 172.16.1.46:/opt/
scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service 172.16.1.46:/usr/lib/systemd/system/
scp /etc/profile.d/kubernetes.sh 172.16.1.46:/etc/profile.d/

5) On master2, edit the kube-apiserver configuration and change the listen addresses to master2's IP

cat kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 \
--insecure-bind-address=0.0.0.0 \
--bind-address=172.16.1.46 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.16.1.46 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

6) Start the services on master2

systemctl start kube-apiserver
systemctl start kube-scheduler
systemctl start kube-controller-manager

Check the logs for errors.

7) Test on master2

kubectl get node 
NAME          STATUS   ROLES    AGE   VERSION
172.16.1.44   Ready    <none>   10d   v1.13.0
172.16.1.45   Ready    <none>   10d   v1.13.0

11.2 Install an nginx proxy on the node machines

Install nginx on every node, listening on the local port 6443 and forwarding to the backend apiservers on port 6443; then point each node's kubelet at 127.0.0.1:6443 to achieve master high availability.

1) Install nginx

yum install nginx -y

2) Edit the nginx configuration

vim /etc/nginx/nginx.conf

user nginx nginx;
worker_processes 8;
 
pid /usr/local/nginx/logs/nginx.pid;

worker_rlimit_nofile 51200;
events
{
    use epoll;
    worker_connections 65535;
}
 
stream {
	upstream k8s-apiserver {
		server 172.16.1.43:6443;
		server 172.16.1.46:6443;
	}
	server {
		listen 127.0.0.1:6443;
		proxy_pass k8s-apiserver;
	}

}

3) Start nginx

systemctl start nginx
systemctl enable nginx

4) Change the apiserver address in all node-side component configurations

cd /opt/kubernetes/cfg

ls *config |xargs -i sed -i 's/172.16.1.43/127.0.0.1/' {}
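To confirm the substitution took effect, the server: entries in the kubeconfig files should now all point at https://127.0.0.1:6443:

grep 'server:' /opt/kubernetes/cfg/*.kubeconfig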

5) Restart kubelet and kube-proxy

systemctl restart kubelet
systemctl restart kube-proxy

Check the logs for errors.

6) On the master, check the node status

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
172.16.1.44   Ready    <none>   10d   v1.13.0
172.16.1.45   Ready    <none>   10d   v1.13.0

Everything looks normal; master high availability is now complete.

