
Deploying a highly available Kubernetes cluster from binaries on CentOS 8 with the kubeasz 3.0.0 Ansible project

Posted: 2022-11-26 08:44:44
Tags: 10.0 kube kubeasz3.0 harbor com ansible 100 k8s root

I. Binary deployment of a multi-master, highly available Kubernetes cluster with ansible and kubeasz 3.0.0

1.#Host plan
Role          Server IP   Hostname              VIP
K8S-Master1   10.0.0.100  master1-100.tan.com   10.0.0.248
K8S-Master2   10.0.0.101  master2-101.tan.com   10.0.0.248
Haproxy1      10.0.0.102  ha1-102.tan.com
Haproxy2      10.0.0.103  ha2-103.tan.com
Harbor1       10.0.0.104  harbor1-104.tan.com
Harbor2       10.0.0.105  harbor2-105.tan.com
Node1         10.0.0.106  node1-106.tan.com
Node2         10.0.0.107  node2-107.tan.com
Node3         10.0.0.108  node3-108.tan.com
etcd1         10.0.0.109  etcd1-109.tan.com
etcd2         10.0.0.110  etcd2-110.tan.com
etcd3         10.0.0.111  etcd3-111.tan.com

2.#Software versions
See the configuration defined in the ezdown file later in this post.

3.#Base environment preparation
Hostname and IP configuration, system parameter tuning, plus the load balancers and Harbor that the cluster depends on

3.1#keepalived:
root@k8s-ha1:~# cat /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 3
    unicast_src_ip 10.0.0.102
    unicast_peer {
        10.0.0.103
    }
    authentication {
        auth_type PASS
        auth_pass 123abc
    }
    virtual_ipaddress {
        10.0.0.248 dev eth0 label eth0:1
    }
}
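As an aside, keepalived's master election between the two HAProxy nodes can be modeled in a few lines. This is a simplified sketch that ignores preemption and timers, and ha2's priority of 80 is an assumption of mine (only ha1's config is shown above; the backup just needs a lower priority):

```python
import ipaddress

def vrrp_master(routers):
    """Pick the VRRP master among unicast peers: highest priority wins,
    ties broken by the numerically highest source IP (simplified model)."""
    return max(routers, key=lambda r: (r["priority"],
                                       int(ipaddress.ip_address(r["ip"]))))

ha_nodes = [{"name": "ha1", "ip": "10.0.0.102", "priority": 100},  # MASTER
            {"name": "ha2", "ip": "10.0.0.103", "priority": 80}]   # assumed BACKUP
print(vrrp_master(ha_nodes)["name"])  # ha1 holds the VIP 10.0.0.248
```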

3.2#haproxy: set net.ipv4.ip_nonlocal_bind = 1 so haproxy can bind the VIP even while this node does not hold it.

listen k8s_api_nodes_6443
    bind 10.0.0.248:6443
    mode tcp
    #balance leastconn
    server 10.0.0.100 10.0.0.100:6443 check inter 2000 fall 3 rise 5
    server 10.0.0.101 10.0.0.101:6443 check inter 2000 fall 3 rise 5
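The `check inter 2000 fall 3 rise 5` parameters determine how quickly HAProxy reacts to an apiserver going down or coming back. A rough back-of-envelope sketch (it ignores the per-check timeout, so real detection is slightly slower):

```python
def state_change_seconds(inter_ms: int, checks: int) -> float:
    """Approximate time for HAProxy to flip a server's state:
    `checks` consecutive health-check results spaced `inter_ms` apart."""
    return inter_ms * checks / 1000.0

print(state_change_seconds(2000, 3))  # fall 3 -> marked DOWN after ~6.0s
print(state_change_seconds(2000, 5))  # rise 5 -> marked UP again after ~10.0s
```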

3.3#Harbor over https:
Internal images are kept on the internal Harbor servers instead of being pulled from the Internet
root@k8s-harbor1:/usr/local/src/harbor# pwd
/usr/local/src/harbor
root@k8s-harbor1:/usr/local/src/harbor# mkdir certs/

# openssl genrsa -out /usr/local/src/harbor/certs/harbor-ca.key #generate the private key
# openssl req -x509 -new -nodes -key /usr/local/src/harbor/certs/harbor-ca.key -subj "/CN=harbor.tan.com" -days 7120 -out /usr/local/src/harbor/certs/harbor-ca.crt #self-sign the certificate

# vim harbor.cfg
hostname = harbor.tan.com
ui_url_protocol = https
ssl_cert = /usr/local/src/harbor/certs/harbor-ca.crt
ssl_cert_key = /usr/local/src/harbor/certs/harbor-ca.key
harbor_admin_password = 123456

# ./install.sh

#Distribute the crt certificate to the docker clients:
[root@master1-100 ~]# mkdir /etc/docker/certs.d/harbor.tan.com -p
[root@master1-100 ~]# scp 10.0.0.104:/usr/local/src/harbor/certs/harbor-ca.crt /etc/docker/certs.d/harbor.tan.com/
[root@master1-100 ~]# vim /etc/hosts   #add a hosts entry for name resolution
10.0.0.104 harbor.tan.com
[root@master1-100 ~]# systemctl restart docker   #restart docker

#Test logging in to harbor:
[root@master1-100 ~]# docker login harbor.tan.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

#Test pushing an image to harbor:
[root@master1-100 ~]# docker pull alpine
[root@master1-100 ~]# docker tag alpine harbor.tan.com/library/alpine:linux36
[root@master1-100 ~]# docker push harbor.tan.com/library/alpine:linux36
The push refers to repository [harbor.tan.com/library/alpine]
256a7af3acb1: Pushed
linux36: digest: sha256:97a042bf09f1bf78c8cf3dcebef94614f2b95fa2f988a5c07314031bc2570c7a size: 528

4.#Deploying with ansible
4.1#Base environment preparation:
[root@master1-100 ~]# ssh-keygen   #generate a key pair (press Enter through every prompt)
[root@master1-100 ~]# yum localinstall sshpass   #sshpass is needed to push the public key to the k8s hosts; I installed the rpm downloaded from the Aliyun mirror
#Script that distributes the public key and docker certificates:
#!/bin/bash
#target host list
IP="
10.0.0.100
10.0.0.101
10.0.0.102
10.0.0.103
10.0.0.104
10.0.0.105
10.0.0.106
10.0.0.107
10.0.0.108
10.0.0.109
10.0.0.110
10.0.0.111
"
for node in ${IP};do
  sshpass -p 123456 ssh-copy-id ${node} -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
    echo "${node} key copied, starting environment initialization....."
    ssh ${node} "ln -s /usr/bin/python3 /usr/bin/python"
    ssh ${node} "mkdir /etc/docker/certs.d/harbor.tan.com -p"
    echo "Harbor certificate directory created!"
    scp /etc/docker/certs.d/harbor.tan.com/harbor-ca.crt ${node}:/etc/docker/certs.d/harbor.tan.com/harbor-ca.crt
    echo "Harbor certificate copied!"
    scp /etc/hosts ${node}:/etc/hosts
    echo "hosts file copied"
    scp -r /root/.docker ${node}:/root/
    echo "Harbor auth file copied!"
    scp -r /etc/resolv.conf ${node}:/etc/
  else
    echo "${node} key copy failed"
  fi
done

#Run the script to sync everything:
[root@master1-100 ~]# bash scp.sh
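Since the target IPs form a contiguous range, the hardcoded list in the script could also be generated. A small illustrative sketch (the function name is mine):

```python
def target_hosts(prefix: str = "10.0.0.", first: int = 100, last: int = 111):
    """Build the same host list the scp.sh script hardcodes."""
    return [f"{prefix}{i}" for i in range(first, last + 1)]

hosts = target_hosts()
print(len(hosts))  # 12 hosts, 10.0.0.100 .. 10.0.0.111
```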



4.2#Installing ansible
[root@master1-100 ~]# yum install git -y
# install ansible with pip (use the Aliyun PyPI mirror if downloads are slow in China)
[root@master1-100 ~]# pip3 install pip --upgrade -i https://mirrors.aliyun.com/pypi/simple/
[root@master1-100 ~]# pip3 install ansible -i https://mirrors.aliyun.com/pypi/simple/

#See https://github.com/easzlab/kubeasz/releases for each release's features
#kubeasz 3.0.0 added support for ansible 2.10.4
#This installs an even newer ansible, so two things need to be patched later (see 4.6).
[root@master1-100 ~]# ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller
starting with Ansible 2.12. Current version: 3.6.8 (default, Aug 24 2020, 17:57:11) [GCC
8.3.1 20191121 (Red Hat 8.3.1-5)]. This feature will be removed from ansible-core in
version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False
in ansible.cfg.
/usr/local/lib/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
  from cryptography.exceptions import InvalidSignature
ansible [core 2.11.12]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Aug 24 2020, 17:57:11) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
  jinja version = 3.0.3
  libyaml = True

4.3#Download the project source, binaries, and offline images
export release=3.0.0
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown
# download everything with the helper script
./ezdown -D

4.4#Create a cluster configuration instance
[root@master1-100 ~]# cd /etc/kubeasz/
[root@master1-100 kubeasz]# ./ezctl new k8s-01

4.5#Edit '/etc/kubeasz/clusters/k8s-01/hosts' and '/etc/kubeasz/clusters/k8s-01/config.yml': adjust the hosts file and the main cluster-level options according to the node plan above; the other cluster component options can be changed in config.yml.

[root@master1-100 kubeasz]# cat /etc/kubeasz/clusters/k8s-01/hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
10.0.0.109
10.0.0.110
10.0.0.111

# master node(s)
[kube_master]
10.0.0.100
10.0.0.101

# work node(s)
[kube_node]
10.0.0.106
10.0.0.107

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with existed one
# 'SELF_SIGNED_CERT': 'no' you need put files of certificates named harbor.pem and harbor-key.pem in directory 'down'
[harbor]
#10.0.0.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no SELF_SIGNED_CERT=yes

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
10.0.0.103 LB_ROLE=backup EX_APISERVER_VIP=10.0.0.248 EX_APISERVER_PORT=8443
10.0.0.102 LB_ROLE=master EX_APISERVER_VIP=10.0.0.248 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
#10.0.0.1

[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.10.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="30000-32767"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
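For a quick sanity check of the inventory above, group membership can be pulled out with a tiny parser. This is illustrative only (real tooling should use ansible-inventory, which understands the full format):

```python
def parse_groups(text):
    """Minimal parser: returns {group: [hosts]} for an ansible INI-style
    inventory, skipping comments, blanks, and the [all:vars] section."""
    groups, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            groups[current] = []
        elif current and current != "all:vars":
            groups[current].append(line.split()[0])  # drop host vars
    return groups

inv = """\
[etcd]
10.0.0.109
10.0.0.110
10.0.0.111

[kube_master]
10.0.0.100
10.0.0.101
"""
g = parse_groups(inv)
print(g["etcd"])  # etcd should have an odd member count
```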


[root@master1-100 kubeasz]# cat /etc/kubeasz/clusters/k8s-01/config.yml
############################
# prepare
############################
# optionally install system packages from an offline source (offline|online)
INSTALL_SOURCE: "online"

# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# NTP servers [important: clocks must be in sync across the cluster]
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# networks allowed to sync time from the cluster, e.g. "10.0.0.0/8"; default allows all
local_network: "0.0.0.0/0"


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.]enable registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd]pause (sandbox) image
SANDBOX_IMAGE: "easzlab/pause-amd64:3.2"

# [containerd]container storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker]container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker]enable the Restful API
ENABLE_REMOTE_API: false

# [docker]trusted insecure (HTTP) registries
INSECURE_REG: '["127.0.0.1/8"]'


############################
# role:kube-master
############################
# extra IPs and domains for the master certificates (e.g. a public IP or domain)
MASTER_CERT_HOSTS:
  - "10.0.0.248"
  - "k8s.easzlab.io"
  #- "www.test.com"

# pod subnet mask length per node (limits how many pod IPs a node can allocate)
# with flannel's --kube-subnet-mgr flag, flannel reads this to assign each node's pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum pods per node
MAX_PODS: 110

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "yes"

# upstream k8s advises against enabling system-reserved casually, unless long-term
# monitoring tells you the system's real resource usage; the reservation may need to
# grow as uptime accumulates, see templates/kubelet-config.yaml.j2. The defaults
# assume a 4c/8g VM with a minimal install of system services; raise them on bigger
# hardware. Also, apiserver and friends spike during installation, so reserve at
# least 1 GB of memory.
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel]backend, e.g. "host-gw" or "vxlan"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel]offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico]setting CALICO_IPV4POOL_IPIP="off" improves network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "Always"

# [calico]host IP used by calico-node; BGP neighbors peer on this address, set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico]network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico]supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.15.3"

# [calico]calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico]offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium]number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium]image version
cilium_ver: "v1.4.1"

# [cilium]offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn]node for the OVN DB and OVN Control Plane; defaults to the first master
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn]offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router]public clouds impose restrictions and usually need ipinip always on; on your own infrastructure this can be "subnet"
OVERLAY_TYPE: "full"

# [kube-router]NetworkPolicy support switch
FIREWALL_ENABLE: "true"

# [kube-router]kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router]kube-router offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# install coredns automatically
dns_install: "yes"
corednsVer: "1.7.1"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.16.0"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# install metrics-server automatically
metricsserver_install: "yes"
metricsVer: "v0.3.6"

# install dashboard automatically
dashboard_install: "yes"
dashboardVer: "v2.1.0"
dashboardMetricsScraperVer: "v1.0.6"

# install ingress automatically
ingress_install: "yes"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"

# install prometheus automatically
prom_install: "yes"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

############################
# role:harbor
############################
# harbor version, full version number
HARBOR_VER: "v1.7.6"
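A quick way to sanity-check the CIDR choices in the hosts file above: SERVICE_CIDR and CLUSTER_CIDR must not overlap each other (or the 10.0.0.x host network), and NODE_CIDR_LEN: 24 bounds how many pod IPs a node could ever address. A sketch with Python's stdlib ipaddress module:

```python
import ipaddress

SERVICE_CIDR = ipaddress.ip_network("10.10.0.0/16")
CLUSTER_CIDR = ipaddress.ip_network("10.20.0.0/16")
NODE_CIDR_LEN = 24
MAX_PODS = 110

# the service and pod ranges must not overlap
assert not SERVICE_CIDR.overlaps(CLUSTER_CIDR)

node_subnets = list(CLUSTER_CIDR.subnets(new_prefix=NODE_CIDR_LEN))
print(len(node_subnets))                    # 256 possible per-node subnets
usable = node_subnets[0].num_addresses - 2  # minus network/broadcast addresses
print(usable)                               # 254 pod IPs per node
assert MAX_PODS < usable                    # MAX_PODS=110 fits comfortably
```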


4.6#Apply the following fixes
#Replace connection: local with delegate_to: localhost, because newer ansible releases dropped support for connection: local
[root@master1-100 ~]#grep -R "connection: local" roles/*  |awk -F: '{print $1}'
[root@master1-100 ~]#cat /etc/kubeasz/xiugai.sh
#!/bin/bash
#================================================================
#   Copyright (C) 2022 IEucd Inc. All rights reserved.
#
#   File name  : xiugai.sh
#   Author     : TanLiang
#   Created    : 2022-11-25
#   Description: This is a test file
#
#================================================================
for con in $(grep -Rl "connection: local" roles/*); do
  sed -i 's#connection: local#delegate_to: localhost#g' "$con"
done
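The loop above can also be collapsed into a single pipeline. The sketch below demonstrates the same substitution on a scratch directory so it is safe to run anywhere (GNU grep and sed assumed):

```shell
# same substitution as xiugai.sh, shown on a throwaway copy
tmp=$(mktemp -d)
printf 'connection: local\n' > "$tmp/task.yml"
grep -Rl "connection: local" "$tmp" | xargs sed -i 's#connection: local#delegate_to: localhost#g'
result=$(cat "$tmp/task.yml")
echo "$result"    # delegate_to: localhost
rm -rf "$tmp"
```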

[root@master1-100 ~]# grep KUBE_APISERVER /etc/kubeasz/roles/* -R
#Two files define the apiserver variable.
#Point the apiserver address in the following file at the VIP
[root@master1-100 ~]#vim /etc/kubeasz/roles/deploy/vars/main.yml
KUBE_APISERVER: "https://10.0.0.248:6443"
#Also change the apiserver address in the second branch of the following file to the VIP. Otherwise, when step 05 installs the kube-node role, each node's /etc/kubernetes/kubelet.kubeconfig points kubelet at https://127.0.0.1:6443, kubelet fails to start, and the flannel network and everything after it breaks.
[root@master1-100 ~]#vim /etc/kubeasz/roles/kube-node/vars/main.yml
KUBE_APISERVER: "{%- if inventory_hostname in groups['kube_master'] -%} \
                     https://{{ inventory_hostname }}:6443 \
                 {%- else -%} \
                     {%- if groups['kube_master']|length > 1 -%} \
                         https://10.0.0.248:6443 \
                     {%- else -%} \
                         https://{{ groups['kube_master'][0] }}:6443 \
                     {%- endif -%} \
                 {%- endif -%}"
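The branch logic of this template is easier to see in plain Python. A sketch mirroring the three cases (the function name is mine, not kubeasz's):

```python
def kube_apiserver(host: str, masters: list, vip: str) -> str:
    """Plain-Python mirror of the Jinja branch logic above."""
    if host in masters:                  # masters talk to themselves
        return f"https://{host}:6443"
    if len(masters) > 1:                 # multi-master: nodes use the VIP
        return f"https://{vip}:6443"
    return f"https://{masters[0]}:6443"  # single master: use it directly

masters = ["10.0.0.100", "10.0.0.101"]
print(kube_apiserver("10.0.0.106", masters, "10.0.0.248"))  # https://10.0.0.248:6443
```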


4.7#Start the installation. If you are not familiar with the installation flow, read the install-steps walkthrough on the project homepage first, then install step by step and verify each step.
# all-in-one install
ezctl setup k8s-01 all

# or install step by step; run ezctl help setup for the per-step help
# ezctl setup k8s-01 01
# ezctl setup k8s-01 02
# ezctl setup k8s-01 03
# ezctl setup k8s-01 04
# ezctl setup k8s-01 05
# ezctl setup k8s-01 06
# ezctl setup k8s-01 07

4.8#Dashboard: the service shows the dashboard is exposed on NodePort 31719 and its pod runs on 10.0.0.106
[root@master1-100 ~]# kubectl get svc -n kube-system -o wide |grep dashboard
dashboard-metrics-scraper   ClusterIP   10.10.73.9      <none>        8000/TCP                     81m   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.10.200.131   <none>        443:31719/TCP                81m   k8s-app=kubernetes-dashboard
[root@master1-100 ~]# kubectl get pod -A  -o wide |grep dashboard
kube-system   dashboard-metrics-scraper-79c5968bdc-sgd9m   1/1     Running   0          82m   10.20.0.2    10.0.0.107   <none>           <none>
kube-system   kubernetes-dashboard-c4c6566d6-htbpb         1/1     Running   2          82m   10.20.0.2    10.0.0.106   <none>           <none>

[root@master1-100 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-q9z5n
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: e8ed3c23-414d-4db7-ae6a-97e8995e50b7

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkdCVlRCdjNEa0V1c1NBbEFmejVRZmo3ei1WTE16YmxGd3ozcV9IZXlsdXcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXE5ejVuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlOGVkM2MyMy00MTRkLTRkYjctYWU2YS05N2U4OTk1ZTUwYjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.MFcviaSc1dEwW5Uz7V-tYEw8UGjOJKooJTWC-ExXkfRyP2hFCB7CE_eIzjj8q3UdH8JqH-PM2MXWF1ker1w1lYWjP54AIYqb5qUkknrdqjUaYNB51ya3FDyzaUrOpi7petgfVQ3PVHEaQ3x6pqcttPcr-F7LfrGpRRJdGHsJplrM4UO007jsLR_ccUw08vKy6rCAH0FACajyvZypWsG9QqBOI6nW2M6UQPqN2sccRP6y-ebHkI36Hdp_Rn7sSeuDdDzvLG7EDbgagUxm2ue4BX9jH4VSCnvEd_bT_qG9K72HdOYOm2phRkGXXkrl-JXzC-Lpk97flMYrJbAjxcujPQ
ca.crt:     1350 bytes
namespace:  11 bytes

#Open https://10.0.0.106:31719 in a browser on the host to reach the dashboard UI.
#Choose "Token" and paste the token above to log in.
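The token above is a JWT, so its payload (issuer, service account) can be inspected by base64url-decoding the middle segment; no signature verification is needed for a quick look. A sketch using a dummy token shaped like a service-account token (the claims below are illustrative, not copied from a real cluster):

```python
import base64, json

def jwt_payload(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature;
    base64url padding must be restored before decoding."""
    seg = token.split(".")[1]
    return json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))

# build a dummy service-account-shaped token for illustration
claims = {"iss": "kubernetes/serviceaccount",
          "sub": "system:serviceaccount:kube-system:admin-user"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
print(jwt_payload(f"x.{body}.y")["sub"])
```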

4.9#Verify pod status
[root@master1-100 ~]# kubectl get pod -A -o wide
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-5787695b7f-64crv                     1/1     Running   0          86m   10.20.0.4    10.0.0.106   <none>           <none>
kube-system   dashboard-metrics-scraper-79c5968bdc-sgd9m   1/1     Running   0          85m   10.20.0.2    10.0.0.107   <none>           <none>
kube-system   kube-flannel-ds-amd64-9q2cp                  1/1     Running   0          91m   10.0.0.101   10.0.0.101   <none>           <none>
kube-system   kube-flannel-ds-amd64-fchzz                  1/1     Running   12         91m   10.0.0.107   10.0.0.107   <none>           <none>
kube-system   kube-flannel-ds-amd64-tzxmz                  1/1     Running   0          91m   10.0.0.100   10.0.0.100   <none>           <none>
kube-system   kube-flannel-ds-amd64-wbqwg                  1/1     Running   12         91m   10.0.0.106   10.0.0.106   <none>           <none>
kube-system   kubernetes-dashboard-c4c6566d6-htbpb         1/1     Running   2          85m   10.20.0.2    10.0.0.106   <none>           <none>
kube-system   metrics-server-8568cf894b-psld8              1/1     Running   2          86m   10.20.0.3    10.0.0.107   <none>           <none>
kube-system   node-local-dns-9bz8q                         1/1     Running   0          86m   10.0.0.106   10.0.0.106   <none>           <none>
kube-system   node-local-dns-d7w5v                         1/1     Running   0          86m   10.0.0.100   10.0.0.100   <none>           <none>
kube-system   node-local-dns-lvr8k                         1/1     Running   0          86m   10.0.0.107   10.0.0.107   <none>           <none>
kube-system   node-local-dns-w5zz9                         1/1     Running   0          86m   10.0.0.101   10.0.0.101   <none>           <none>
kube-system   traefik-79f5f7879c-wp9g8                     1/1     Running   0          85m   10.20.0.3    10.0.0.106   <none>           <none>

4.10#Adding a node
[root@master1-100 ~]#kubectl get node
NAME         STATUS                     ROLES    AGE     VERSION
10.0.0.100   Ready,SchedulingDisabled   master   3h18m   v1.20.2
10.0.0.101   Ready,SchedulingDisabled   master   3h18m   v1.20.2
10.0.0.106   Ready                      node     134m    v1.20.2
10.0.0.107   Ready                      node     134m    v1.20.2

[root@master1-100 ~]#ezctl add-node k8s-01 10.0.0.108

[root@master1-100 ~]#kubectl get node
NAME         STATUS                     ROLES    AGE     VERSION
10.0.0.100   Ready,SchedulingDisabled   master   3h23m   v1.20.2
10.0.0.101   Ready,SchedulingDisabled   master   3h23m   v1.20.2
10.0.0.106   Ready                      node     140m    v1.20.2
10.0.0.107   Ready                      node     139m    v1.20.2
10.0.0.108   Ready                      node     2m40s   v1.20.2

From: https://www.cnblogs.com/tanll/p/16926869.html
