
k8s01 - Deploying Kubernetes 1.26 with kubeadm



kubeadm is the official Kubernetes tool for quickly deploying a Kubernetes cluster. It is updated alongside every Kubernetes release, and with each release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest best practices the Kubernetes project applies to cluster configuration.

1 Preparation

1.1 System configuration

Before installing, complete the preparation below. The three CentOS 7.8 hosts are:

  • OS: CentOS Linux release 7.8.2003 (Core)
  • Kubernetes version: 1.26
IP          Hostname   Role
10.0.4.21   vm21       master
10.0.4.22   vm22       worker
10.0.4.23   vm23       worker

Complete the following system configuration on each host.

If a firewall is enabled on any of the hosts, open the ports required by the Kubernetes components (see Ports and Protocols for the list), or disable the host firewall.

  • Disable SELinux:
setenforce 0

vi /etc/selinux/config
SELINUX=disabled
  • Create the /etc/modules-load.d/containerd.conf configuration file:
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Run the following commands to apply the configuration:

modprobe overlay
modprobe br_netfilter
  • Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:
cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

Run the following command to apply the configuration:

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
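
To double-check that these settings are in effect, you can read the values back (a quick sanity check, not part of the original steps):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
lsmod | grep -e overlay -e br_netfilter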

1.2 Prerequisites for enabling ipvs

Since ipvs has already been merged into the mainline kernel, enabling ipvs for kube-proxy requires the following kernel modules to be loaded first:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following on every node:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the script executable, run it, and verify that the modules are loaded:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The /etc/sysconfig/modules/ipvs.modules file created by the script above ensures the required modules are loaded automatically after a node reboot.
Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules have been loaded correctly.

Next, make sure the ipset package is installed on each node. To make it easier to inspect ipvs proxy rules, it is also worth installing the management tool ipvsadm.

yum install -y ipset ipvsadm

If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables ipvs mode.
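Once the cluster is up and running, you can confirm which mode kube-proxy actually ended up in. Two quick checks (assuming kube-proxy's metrics endpoint listens on the default port 10249 on each node):

# List ipvs virtual servers; entries appear here only when ipvs rules are being programmed
ipvsadm -Ln
# Ask kube-proxy directly which proxy mode it is running
curl -s 127.0.0.1:10249/proxyMode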

1.3 Deploying the container runtime containerd

Install the containerd container runtime on every node.
Download the containerd binary package:

You can download it in advance on a machine with network access and then upload it to the servers.

wget https://github.com/containerd/containerd/releases/download/v1.6.14/cri-containerd-cni-1.6.14-linux-amd64.tar.gz

The cri-containerd-cni-1.6.14-linux-amd64.tar.gz archive is already laid out in the directory structure recommended for the official binary deployment; it contains the systemd unit file, containerd itself, and the CNI deployment files. Extract it into the system root directory /:

tar -zxvf cri-containerd-cni-1.6.14-linux-amd64.tar.gz -C /

etc/
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/containerd-stress
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/crictl
usr/local/bin/critest
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/ctd-decoder
usr/local/bin/containerd
usr/local/bin/ctr
opt/
opt/cni/
opt/cni/bin/
opt/cni/bin/ptp
opt/cni/bin/bandwidth
opt/cni/bin/static
opt/cni/bin/dhcp
...
opt/containerd/
opt/containerd/cluster/
...

Note: as tested, the runc bundled in the cri-containerd-cni archive has dynamic-linking problems on CentOS 7, so download runc separately from its GitHub releases and use it to replace the runc installed above:

wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64
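
The download is a bare binary named runc.amd64. One way to complete the replacement described above, overwriting the runc that the containerd package extracted to /usr/local/sbin, is:

# Install the statically linked runc over the bundled one and confirm it runs
install -m 755 runc.amd64 /usr/local/sbin/runc
runc -v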

Next, generate containerd's configuration file:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

According to the Container runtimes documentation, for Linux distributions that use systemd as the init system, using systemd as the container cgroup driver makes nodes more stable under resource pressure, so configure containerd's cgroup driver to systemd on every node.

Modify the config file /etc/containerd/config.toml generated above:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Then modify the following in /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

Enable containerd to start on boot and start it now:

systemctl enable containerd --now

Test with crictl and make sure it prints version information without any errors:

crictl version

Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.6.14
RuntimeApiVersion:  v1

2. Deploying Kubernetes with kubeadm

2.1 Installing kubeadm and kubelet

Next, install kubeadm and kubelet on every node. Create the yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install -y kubelet kubeadm kubectl
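
yum install -y kubelet kubeadm kubectl installs the newest version available in the repository. If you want to pin the install to exactly 1.26.0 instead (an optional variation, not part of the original steps), the versioned packages can be listed and installed like this:

# List the versions available in the repo, then install a specific one
yum list kubeadm --showduplicates | sort -r
yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0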

Running kubelet --help shows that most of kubelet's original command-line flags are DEPRECATED. The official recommendation is to use --config to point at a configuration file and put what those flags used to configure into that file; see Set Kubelet parameters via a config file for details.

Kubernetes originally did this to support Dynamic Kubelet Configuration, but that feature was deprecated in 1.22 and removed in 1.24. If you need to adjust the kubelet configuration of all nodes in a cluster, the recommendation is still to distribute the configuration to each node with a tool such as ansible.

The kubelet configuration file must be in JSON or YAML format; see the documentation for details.

Since version 1.8, Kubernetes requires swap to be disabled; with the default configuration, kubelet will not start otherwise. Disable swap as follows:

swapoff -a

Modify /etc/fstab to comment out the swap auto-mount entry, and use free -m to confirm swap is off.
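One way to do all of this non-interactively (a sketch; double-check /etc/fstab afterwards):

# Turn swap off now and comment out any swap lines so it stays off after reboot
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m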

Adjust the swappiness parameter by adding the following line to /etc/sysctl.d/99-kubernetes-cri.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf to apply the change.

2.2 Initializing the cluster with kubeadm init

Enable the kubelet service to start on boot on every node:

systemctl enable kubelet.service

kubeadm config print init-defaults --component-configs KubeletConfiguration prints the default configuration used for cluster initialization:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

The default configuration shows that imageRepository can be used to customize the registry from which the images needed by Kubernetes are pulled during cluster initialization. Based on the defaults, create the kubeadm.yaml configuration file used for this kubeadm init; in particular, change advertiseAddress to your master node's address, here vm21, the IP of the first master:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.4.21
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Here imageRepository is customized to the Aliyun registry to avoid failures pulling images directly because gcr is blocked. criSocket sets the container runtime to containerd, kubelet's cgroupDriver is set to systemd, and kube-proxy's proxy mode is set to ipvs.

Before starting cluster initialization, you can pre-pull the container images Kubernetes needs on each node with kubeadm config images pull --config kubeadm.yaml.

kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3

Next, initialize the cluster with kubeadm. vm21 is chosen as the master node; run the following on vm21:

[root@vm21 opt]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3
[root@vm21 opt]# kubeadm init --config kubeadm.yaml 
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vm21] and IPs [10.96.0.1 10.0.4.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vm21] and IPs [10.0.4.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vm21] and IPs [10.0.4.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.504506 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vm21 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vm21 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: 957r3e.sanmpgyjhozmdv9p
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.4.21:6443 --token 957r3e.sanmpgyjhozmdv9p \
        --discovery-token-ca-cert-hash sha256:6f7c594910cf33d849e0f2d48fb6529ef451e6840bf144d830848304a18bbfc0

The full initialization output is recorded above; from it you can more or less see the key steps required to install a Kubernetes cluster by hand. The key items are:

  • [certs] generates the various certificates

  • [kubeconfig] generates the kubeconfig files

  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"

  • [control-plane] creates static pods for the apiserver, controller-manager and scheduler from the YAML files in /etc/kubernetes/manifests

  • [bootstraptoken] generates a token; record it, since it is needed later when adding nodes to the cluster with kubeadm join

  • [addons] installs the essential add-ons CoreDNS and kube-proxy

  • The following commands configure kubectl access to the cluster for a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Finally, it prints the command for joining the other two nodes to the cluster:
kubeadm join 10.0.4.21:6443 --token 957r3e.sanmpgyjhozmdv9p \
        --discovery-token-ca-cert-hash sha256:6f7c594910cf33d849e0f2d48fb6529ef451e6840bf144d830848304a18bbfc0

Check the cluster status and confirm every component is healthy (the output below is from kubectl get cs):

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""} 

If cluster initialization runs into problems, you can clean up with kubeadm reset and try again.

2.3 Installing the package manager Helm 3

Helm is the package manager for Kubernetes, and the later steps also use Helm to install common Kubernetes components. Install helm on the master node vm21 first.

wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
tar -zxvf helm-v3.10.3-linux-amd64.tar.gz
mv linux-amd64/helm  /usr/local/bin/

Run helm list and confirm there is no error output:

helm list
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

2.4 Deploying the pod network add-on Calico

Calico is used as the pod network add-on; the following installs calico into the cluster with helm.

Download the tigera-operator helm chart:

wget https://github.com/projectcalico/calico/releases/download/v3.24.5/tigera-operator-v3.24.5.tgz

View the customizable values of this chart:

helm show values tigera-operator-v3.24.5.tgz

imagePullSecrets: {}

installation:
  enabled: true
  kubernetesProvider: ""

apiServer:
  enabled: true

certs:
  node:
    key:
    cert:
    commonName:
  typha:
    key:
    cert:
    commonName:
    caBundle:

# Resource requests and limits for the tigera/operator pod.
resources: {}

# Tolerations for the tigera/operator pod.
tolerations:
- effect: NoExecute
  operator: Exists
- effect: NoSchedule
  operator: Exists

# NodeSelector for the tigera/operator pod.
nodeSelector:
  kubernetes.io/os: linux

# Custom annotations for the tigera/operator pod.
podAnnotations: {}

# Custom labels for the tigera/operator pod.
podLabels: {}

# Image and registry configuration for the tigera/operator pod.
tigeraOperator:
  image: tigera/operator
  version: v1.28.5
  registry: quay.io
calicoctl:
  image: docker.io/calico/ctl
  tag: v3.24.5

The customized values.yaml is as follows:

# The settings above can be customized further, e.g. pulling the calico images from a private registry.
# This is just a personal local test of the new k8s version, so only the few lines below are set.
apiServer:
  enabled: false

Install calico with helm:

helm install calico tigera-operator-v3.24.5.tgz -n kube-system  --create-namespace -f values.yaml

Wait for all pods to reach the Running state and confirm:

kubectl get pod -n kube-system |grep tigera
tigera-operator-7795f5d79b-cflnb   1/1     Running   0          22h
 kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-67df98bdc8-rwlq6   1/1     Running   0          22h
calico-node-5pkkn                          1/1     Running   0          22h
calico-node-wtxpk                          1/1     Running   0          22h
calico-node-xgj8t                          1/1     Running   0          22h
calico-typha-5bf9c7b58-2w6gc               1/1     Running   0          22h
calico-typha-5bf9c7b58-jx575               1/1     Running   0          22h

Take a look at the API resources that calico adds to Kubernetes:

kubectl api-resources |grep calico
bgpconfigurations                                                                 crd.projectcalico.org/v1               false        BGPConfiguration
bgppeers                                                                          crd.projectcalico.org/v1               false        BGPPeer
blockaffinities                                                                   crd.projectcalico.org/v1               false        BlockAffinity
caliconodestatuses                                                                crd.projectcalico.org/v1               false        CalicoNodeStatus
clusterinformations                                                               crd.projectcalico.org/v1               false        ClusterInformation
felixconfigurations                                                               crd.projectcalico.org/v1               false        FelixConfiguration
globalnetworkpolicies                                                             crd.projectcalico.org/v1               false        GlobalNetworkPolicy
globalnetworksets                                                                 crd.projectcalico.org/v1               false        GlobalNetworkSet
hostendpoints                                                                     crd.projectcalico.org/v1               false        HostEndpoint
ipamblocks                                                                        crd.projectcalico.org/v1               false        IPAMBlock
ipamconfigs                                                                       crd.projectcalico.org/v1               false        IPAMConfig
ipamhandles                                                                       crd.projectcalico.org/v1               false        IPAMHandle
ippools                                                                           crd.projectcalico.org/v1               false        IPPool
ipreservations                                                                    crd.projectcalico.org/v1               false        IPReservation
kubecontrollersconfigurations                                                     crd.projectcalico.org/v1               false        KubeControllersConfiguration
networkpolicies                                                                   crd.projectcalico.org/v1               true         NetworkPolicy
networksets                                                                       crd.projectcalico.org/v1               true         NetworkSet

These API resources belong to calico, so managing them with kubectl is not recommended; install calicoctl to manage them instead. Install calicoctl as a kubectl plugin:

cd /usr/local/bin
curl -L -o kubectl-calico "https://github.com/projectcalico/calicoctl/releases/download/v3.21.5/calicoctl-linux-amd64"
chmod +x kubectl-calico

Verify that the plugin works:

kubectl calico -h
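
As a further sanity check you can query a calico resource through the plugin, for example the IP pool that the operator creates by default (the pool name may vary with your installation):

kubectl calico get ippool -o wide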

2.5 Verifying that cluster DNS works

First run:

kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$

To get back into the same container later and keep running commands:

kubectl exec -it curl -- /bin/sh

Inside the container, run nslookup kubernetes.default and confirm that resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

2.6 Adding worker nodes to the Kubernetes cluster

Add vm22 and vm23 to the Kubernetes cluster by running the following on vm22 and vm23 respectively:

kubeadm join 10.0.4.21:6443 --token 957r3e.sanmpgyjhozmdv9p \
        --discovery-token-ca-cert-hash sha256:6f7c594910cf33d849e0f2d48fb6529ef451e6840bf144d830848304a18bbfc0

After the nodes have joined successfully, view the current cluster nodes from the master node:

kubectl get nodes
NAME   STATUS   ROLES                AGE   VERSION
vm21   Ready    control-plane,edge   23h   v1.26.1
vm22   Ready    <none>               23h   v1.26.1
vm23   Ready    <none>               23h   v1.26.1
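
The newly joined nodes show ROLES as <none>. If you would like them to display a worker role (purely cosmetic, not required by anything above), you can label them:

kubectl label node vm22 node-role.kubernetes.io/worker=
kubectl label node vm23 node-role.kubernetes.io/worker=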

3. Deploying common Kubernetes components

3.1 Deploying ingress-nginx with Helm

To expose services inside the cluster to the outside world, an Ingress is needed. The following uses Helm to deploy ingress-nginx onto Kubernetes, with the Nginx Ingress Controller running on the cluster's edge node.

Here vm21 (10.0.4.21) is used as the edge node; add the label:

kubectl label node vm21 node-role.kubernetes.io/edge=

Download the ingress-nginx helm chart:

wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.4.2/ingress-nginx-4.4.2.tgz

View the customizable values of the ingress-nginx-4.4.2.tgz chart:

helm show values ingress-nginx-4.4.2.tgz

Customize values.yaml as follows:

controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: true
    controllerValue: "k8s.io/ingress-nginx"
  admissionWebhooks:
    enabled: false
  replicaCount: 1
  image:
    # registry: registry.k8s.io
    # image: ingress-nginx/controller
    # tag: "v1.5.1"
    registry: docker.io
    image: unreachableg/registry.k8s.io_ingress-nginx_controller
    tag: "v1.5.1"
    digest: sha256:97fa1ff828554ff4ee1b0416e54ae2238b27d1faa6d314d5a94a92f1f99cf767
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - nginx-ingress
            - key: component
              operator: In
              values:
              - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: PreferNoSchedule

The nginx ingress controller replicaCount is 1, and it will be scheduled onto the edge node vm21. No externalIPs are specified for the nginx ingress controller service; instead, hostNetwork: true makes the controller use the host network. Because k8s.gcr.io is blocked, the image is replaced with unreachableg/registry.k8s.io_ingress-nginx_controller; pull the image in advance:

crictl pull unreachableg/registry.k8s.io_ingress-nginx_controller:v1.5.1

Deploy:

helm install ingress-nginx ingress-nginx-4.4.2.tgz --create-namespace -n ingress-nginx -f values.yaml

Check the deployment result:

kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-7c96f857f-szcct   1/1     Running   0          22h

Visit http://10.0.4.21; if it returns nginx's default 404 page, the deployment is complete.
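A quick command-line check from any machine that can reach 10.0.4.21 (the exact response headers may differ):

# Expect an HTTP 404 answered by the ingress-nginx default backend
curl -i http://10.0.4.21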

3.2 Deploying the dashboard with Helm

Kubernetes itself ships with some basic monitoring facilities, for example:

  • K8s Dashboard: an add-on that shows resource utilization across the cluster and is also the main tool for managing and interacting with resources and environments.
  • Pod liveness probe: a diagnostic for container health.
  • Kubelet: runs on every node and monitors the containers running there; the kubelet is also the channel through which the control plane communicates with each node.

The kubelet listens on port 10250 by default, so from the control plane or a node you can query it directly with curl https://127.0.0.1:10250/metrics/cadvisor -k

  • HTTPS must be used.
  • metrics/cadvisor exposes the pod-related monitoring metrics collected via cAdvisor; there is also a plain metrics endpoint for the kubelet's own metrics.
  • -k skips verification of the kubelet certificate; the whole cluster uses self-signed certificates, so there is nothing to verify against.

Deploy metrics-server first:

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/metrics-server-helm-chart-3.8.3/components.yaml

Change the image in components.yaml to docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.6.2,
and add --kubelet-insecure-tls to the container's startup arguments:

spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls
        - --metric-resolution=15s
        #image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
        image: docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.6.2
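
components.yaml still has to be applied before metrics-server shows up in the cluster; presumably the step used here was simply:

kubectl apply -f components.yaml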

Check that metrics-server is running:

kubectl get deploy -n kube-system
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
coredns           2/2     2            2           23h
metrics-server    1/1     1            1           42s
tigera-operator   1/1     1            1           23h

Once the metrics-server pod has started, wait a little while and you can view cluster and pod metrics with kubectl top:

kubectl top node
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
vm21   372m         9%     2198Mi          59%       
vm22   174m         4%     1936Mi          52%       
vm23   150m         3%     1967Mi          53%       

kubectl top pod -n kube-system
NAME                               CPU(cores)   MEMORY(bytes)   
coredns-5bbd96d687-4fgcp           3m           20Mi            
coredns-5bbd96d687-l4bd7           3m           21Mi            
etcd-vm21                          54m          154Mi           
kube-apiserver-vm21                99m          463Mi           
kube-controller-manager-vm21       35m          76Mi            
kube-proxy-6x5cx                   9m           18Mi            
kube-proxy-jz7ls                   1m           25Mi            
kube-proxy-vxf64                   9m           21Mi            
kube-scheduler-vm21                7m           33Mi            
metrics-server-6f67c7d9b4-b99gm    6m           17Mi            
tigera-operator-7795f5d79b-cflnb   5m           44Mi            

Next, use helm to deploy the Kubernetes dashboard. Add the chart repo:

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
"kubernetes-dashboard" has been added to your repositories

helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubernetes-dashboard" chart repository
Update Complete. ⎈Happy Helming!⎈

View the chart's customizable values:

helm show values kubernetes-dashboard/kubernetes-dashboard
  • Enable HTTPS access
    The dashboard will be exposed through the ingress under the domain k8s.init.com, with HTTPS enabled for that domain.
    To enable HTTPS, an SSL certificate must be obtained for the domain or a self-signed one used; here the certificate and key files are scert.pem and skey.pem.

The certificate and key files need to be generated in advance. This is a test environment, so a local self-signed certificate generated with openssl is enough; the shell script is as follows.

openssl needs to be installed on the server first: yum install -y openssl openssl-devel

#!/bin/sh

country="CN"
state="SZ"
city="NS"
org="SRE"
unit="MONITOR"
commonname="k8s.init.com"
email="[email protected]"

openssl req -new -x509 -days 3650 -nodes -out scert.pem -keyout skey.pem<<EOF
$country
$state
$city
$org
$unit
$commonname
$email
EOF

Two files are generated in the current directory:

ll *.pem
-rw-r--r--. 1 root root 1379 Jan 29 05:41 scert.pem
-rw-r--r--. 1 root root 1704 Jan 29 05:41 skey.pem

Create the secret that stores the SSL certificate for k8s.init.com:

kubectl create secret tls init-com-tls-secret --cert=scert.pem --key=skey.pem -n kube-system

secret/init-com-tls-secret created

Customize the chart values (saved here as dashboard-value.yaml) as follows:

image:
  repository: kubernetesui/dashboard
  tag: v2.7.0
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  hosts:
  - k8s.init.com
  tls:
    - secretName: init-com-tls-secret
      hosts:
      - k8s.init.com
metricsScraper:
  enabled: true

Deploy the dashboard with helm:

helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -n kube-system -f dashboard-value.yaml

# Output:
NAME: kubernetes-dashboard
LAST DEPLOYED: Sun Jan 29 04:45:40 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*********************************************************************************
*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***
*********************************************************************************
From outside the cluster, the server URL(s) are:
     https://k8s.init.com

Confirm that the command above deployed successfully.

Create an admin ServiceAccount:

kubectl create serviceaccount kube-dashboard-admin-sa -n kube-system

kubectl create clusterrolebinding kube-dashboard-admin-sa \
--clusterrole=cluster-admin --serviceaccount=kube-system:kube-dashboard-admin-sa

Create the token the cluster administrator needs to log in to the dashboard:

kubectl create token kube-dashboard-admin-sa -n kube-system --duration=87600h

eyJhbGciOiJSUzI1NiIsImtpZCI6Il9zMmg4bHZRSXBWSWFkcWhQcDM1WnJadlF1NHNEblBXaWZ5b2hFcmtnRU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxOTkwMzQ2MTAxLCJpYXQiOjE2NzQ5ODYxMDEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYSIsInVpZCI6ImE3ZjZiOGM1LWQyYjUtNGU4ZS1iNGEzLTcwMWVkZWNiNGNkZSJ9fSwibmJmIjoxNjc0OTg2MTAxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZS1kYXNoYm9hcmQtYWRtaW4tc2EifQ.xnLhaAkfUXPJfbgThsqK3ToEJstYCRh756aDJN9s_DI4ao4rbwffHUW9Tv_5eEHIxLTyZc40ctsNek-hR7ey_MCUyhClJd1x8WbGlyKXyOcUMXRq3VFQVa3HJ_ria0tX-S6UWtR8xmY1h5QuxyYFVWRhevHdAv4SSPYBxzvM6uwhS1xqPzEqclxDfrWXkkQ_FcRHgJLoAipLHJSyGkmOsdwWh3Ih0wdaGXgeAu5eFBLnwvDZYKJE-WLIFH0mS0P3Tz9i6-XNu05xIq9kba6aPw-xR-D1fh8McSi13BpuQtn2m8e0rRLDIqw0JfWLu7EuSZhAuHLpBTkN0RN-Yfo4pg

Log in to the Kubernetes dashboard with the token above.

Note: on the machine from which you access the dashboard, add a local hosts entry such as 10.0.4.21 k8s.init.com, as shown below.
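On a Linux or macOS client this can be done with the following (on Windows, edit C:\Windows\System32\drivers\etc\hosts instead):

echo "10.0.4.21 k8s.init.com" >> /etc/hosts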

References

https://blog.frognew.com/2023/01/kubeadm-install-kubernetes-1.26.html

