
Deploying Kubernetes v1.30 on Ubuntu 22.04


1 Shell Tool

Xshell (free edition). Under the Tools menu you can enable sending input to all sessions at once, which speeds up installing Kubernetes on the different nodes later.

2 Node Planning

The OS is Ubuntu Server 22.04; download it from https://ubuntu.com/download/server

Hostname     IP              Resources   Node Name
k8s-master   192.168.0.150   8C16G       k8s-master
k8s-node1    192.168.0.151   4C8G        k8s-node1
k8s-node2    192.168.0.152   4C8G        k8s-node2

3 Virtual Machines

3.1 Version

VMware Workstation 15 Pro (installed quite a while ago). VMware Workstation is now free, so you can download a newer version yourself.

3.2 Installing Ubuntu 22.04

1. After the download completes, install a virtual machine. This installed VM can serve as a "template VM" (use bridged mode for the VM network).

2. Keep the "template VM" and make full clones of it for the VMs that will run Kubernetes (the number of VMs depends on the planned cluster size; this article uses 3).

3. After clicking Clone, pay attention to the options shown in the figure below. In the end, Ubuntu22.04 is fully cloned into Ubuntu22.04-master, Ubuntu22.04-node1, and Ubuntu22.04-node2, referred to below simply as master, node1, and node2.

4. Add /etc/hosts entries; the hostnames of master, node1, and node2 are k8s-master, k8s-node1, and k8s-node2 (a sketch of the commands is shown below).
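
A minimal sketch of the hostname and /etc/hosts configuration, assuming the hostnames and IPs from the planning table above:

# On each VM set its hostname (example for the master; use k8s-node1 / k8s-node2 on the workers)
hostnamectl set-hostname k8s-master

# Append the cluster entries to /etc/hosts on master, node1, and node2
cat >> /etc/hosts <<EOF
192.168.0.150 k8s-master
192.168.0.151 k8s-node1
192.168.0.152 k8s-node2
EOF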

3.3 Connecting to master with the shell tool

1. Right after installation, master obtains its IP address via DHCP, so the IP is not yet known and the shell tool cannot log in.

2. Use ifconfig to look up the IP (the tool must be installed first: apt install net-tools).

3. Copy and paste in the VMware console is awkward; once the IP is known you can log in over the shell tool, which makes configuring master, node1, and node2 much more efficient. First log in to master with the regular user created during installation and run sudo passwd root to set a root password; before logging in as root you also need to allow root login, configured from the VMware console (see the reference).

4. Configure a static IP for master: vim /etc/netplan/00-installer-config.yaml (see the reference).

Before:

# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: true
  version: 2

After:

# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      addresses:
        - 192.168.0.150/24
      routes:
      - to: default
        via: 192.168.0.1
      nameservers:
        addresses:
          - 114.114.114.114
  version: 2

Note: when planning IPs, make absolutely sure the addresses you assign do not conflict with the host's IP or any other IP on the network.
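
A minimal sketch of applying and verifying the new netplan configuration (ens33 is the interface name from the file above; the SSH session may briefly drop if the IP changes):

netplan apply

# Verify the static address and the default route
ip addr show ens33
ip route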

3.4 Connecting to node1 and node2

Follow the same steps as for master, using 192.168.0.151 and 192.168.0.152.

3.5 Disabling swap

See the reference; a sketch of the commands is shown below.
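
A minimal sketch of turning swap off on every node, assuming swap is configured through /etc/fstab (kubelet expects swap to be disabled by default):

# Turn swap off immediately
swapoff -a

# Comment out swap entries so the change survives a reboot
sed -ri '/\sswap\s/s/^([^#])/#\1/' /etc/fstab

# Confirm: the Swap line should show 0B
free -h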

4 Installing Kubernetes

See the official documentation (reference).

4.1 Installing containerd 1.7.23

1. Download containerd on all nodes at the same time, and check that the download succeeded on each node.

Command to download containerd:

wget https://github.com/containerd/containerd/releases/download/v1.7.23/containerd-1.7.23-linux-amd64.tar.gz

2. Extract it: tar Cxzvf /usr/local containerd-1.7.23-linux-amd64.tar.gz (the archive name must match the version downloaded above); check the other nodes as well.

root@k8s-master:~# ll /usr/local/bin
total 134200
drwxr-xr-x  2 root root     4096 Oct 14 20:42 ./
drwxr-xr-x 10 root root     4096 Feb 17  2023 ../
-rwxr-xr-x  1 root root 55716064 Oct 14 20:42 containerd*
-rwxr-xr-x  1 root root  6336664 Oct 14 20:42 containerd-shim*
-rwxr-xr-x  1 root root  7438488 Oct 14 20:42 containerd-shim-runc-v1*
-rwxr-xr-x  1 root root 12492952 Oct 14 20:42 containerd-shim-runc-v2*
-rwxr-xr-x  1 root root 26560840 Oct 14 20:42 containerd-stress*
-rwxr-xr-x  1 root root 28849992 Oct 14 20:42 ctr*

3. Create the containerd configuration file: run containerd config default to get the default configuration and write it to /etc/containerd/config.toml (a sketch of the commands follows the output below).

root@k8s-master:~# containerd config default
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

......
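
A minimal sketch of writing the default configuration to the path containerd reads:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml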

4. Configure the systemd cgroup driver (see the reference).
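
A minimal sketch of the edit; the generated default configuration sets SystemdCgroup = false under the runc runtime options, so a simple substitution is enough here:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml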

root@k8s-master:~# cat /etc/containerd/config.toml | grep SystemdCgroup
            SystemdCgroup = true

5. Install the containerd systemd service (see the reference; a sketch of the commands is shown below), then start the service:
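
A minimal sketch of installing and enabling the unit, assuming the containerd.service file from the containerd repository (the same path that appears in the status output below):

mkdir -p /usr/local/lib/systemd/system
wget -O /usr/local/lib/systemd/system/containerd.service \
  https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

systemctl daemon-reload
systemctl enable containerd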

root@k8s-master:~# systemctl start containerd
root@k8s-master:~# systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/usr/local/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-12-15 05:36:12 UTC; 4h 31min ago
       Docs: https://containerd.io
   Main PID: 923 (containerd)
      Tasks: 122
     Memory: 318.5M
        CPU: 31min 27.510s
     CGroup: /system.slice/containerd.service
             ├─   923 /usr/local/bin/containerd

4.2 Installing runc 1.2.2

1. See the reference.

Download runc 1.2.2:

wget https://github.com/opencontainers/runc/releases/download/v1.2.2/runc.amd64

2. Install it as described in the reference; runc.amd64 is a single binary rather than an archive (a sketch is shown below).
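
A minimal sketch of the install step, following the containerd getting-started guide:

install -m 755 runc.amd64 /usr/local/sbin/runc

# Verify
runc --version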

4.3 Installing the CNI plugins 1.6.0

1. See the reference; the download and extraction commands follow below.

Command to download the CNI plugins:

wget https://github.com/containernetworking/plugins/releases/download/v1.6.0/cni-plugins-linux-amd64-v1.6.0.tgz
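
A minimal sketch of extracting the plugins to /opt/cni/bin, the directory CNI configurations expect by default:

mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.0.tgz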

4.4 Installing kubeadm

1. Start with the "Before you begin" prerequisites.

2. Make sure the MAC address and product_uuid are unique on every node: cat /sys/class/dmi/id/product_uuid

3. Check the network adapters and enable IPv4 packet forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward (a persistent version is sketched below).
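
A minimal sketch of making IPv4 forwarding persistent across reboots (the echo above only changes the running kernel), using a sysctl drop-in file as in the kubeadm prerequisites:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

# Apply without rebooting and verify
sysctl --system
sysctl net.ipv4.ip_forward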

4. Check the required ports: nc 127.0.0.1 6443 -v

5. Install a container runtime; containerd was already installed above.

6. Install kubeadm, kubelet, and kubectl (see the reference).

  • Update the apt package index and install the packages needed to use the Kubernetes apt repository:

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip installing it
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

  • Download the public signing key for the Kubernetes package repositories. All repositories use the same signing key, so you can ignore the version in the URL:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

  • Add the Kubernetes apt repository:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

  • Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

7. At this point the kubelet keeps restarting in a crash loop; it is waiting for instructions from kubeadm.

8. Running a crictl command such as crictl ps prints the following warning:

  • The default runtime and image endpoint settings have been deprecated and need to be set manually.

  • There are two fixes: environment variables, or a crictl.yaml configuration file.

Environment variables:

export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
export IMAGE_SERVICE_ENDPOINT=unix:///run/containerd/containerd.sock


crictl.yaml configuration:

cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Configuration notes:

runtime-endpoint: unix:///run/containerd/containerd.sock   # containerd runtime endpoint, using a local Unix socket
image-endpoint: unix:///run/containerd/containerd.sock     # containerd image endpoint, also a local Unix socket
timeout: 2                                                  # operation timeout in seconds
debug: false                                                # whether to enable debug mode
pull-image-on-create: false                                 # whether to pull the image immediately on create (not configured above)

Then restart containerd:

systemctl restart containerd
  • Run crictl again; the warning is gone.

9. For kubeadm troubleshooting, see the reference.

4.5 Deploying the cluster with kubeadm

1. From this point on, turn off Xshell's "send to all sessions" switch, because the steps for master differ from those for node1 and node2; turn it back on whenever the same operation is needed on all nodes again.

2. See the reference for the kubeadm deployment command.

The Aliyun mirror is used via --image-repository=registry.aliyuncs.com/google_containers; the deployment command is as follows:

kubeadm init \
 --image-repository=registry.aliyuncs.com/google_containers \
 --pod-network-cidr=172.16.0.0/16 \
 --apiserver-advertise-address=192.168.0.150 \
 --v=5

 

root@k8s-master:~# kubeadm init  --image-repository=registry.aliyuncs.com/google_containers  --pod-network-cidr=172.16.0.0/16  --apiserver-advertise-address=192.168.0.150  --v=5
I1215 07:12:44.619343  168455 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I1215 07:12:44.619533  168455 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I1215 07:12:44.626979  168455 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
W1215 07:12:46.328785  168455 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": read tcp [2409:8a00:24d0:46e4:20c:29ff:fecc:ec79]:39864->[2600:1901:0:26f3::]:443: read: connection reset by peer
W1215 07:12:46.328866  168455 version.go:105] falling back to the local client version: v1.30.8
[init] Using Kubernetes version: v1.30.8
[preflight] Running pre-flight checks
I1215 07:12:46.330131  168455 checks.go:561] validating Kubernetes and kubeadm version
I1215 07:12:46.330182  168455 checks.go:166] validating if the firewall is enabled and active
I1215 07:12:46.357710  168455 checks.go:201] validating availability of port 6443
I1215 07:12:46.358399  168455 checks.go:201] validating availability of port 10259
I1215 07:12:46.358486  168455 checks.go:201] validating availability of port 10257
I1215 07:12:46.358534  168455 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1215 07:12:46.358598  168455 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1215 07:12:46.358612  168455 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1215 07:12:46.358621  168455 checks.go:278] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1215 07:12:46.358635  168455 checks.go:428] validating if the connectivity type is via proxy or direct
I1215 07:12:46.358719  168455 checks.go:467] validating http connectivity to first IP address in the CIDR
I1215 07:12:46.358803  168455 checks.go:467] validating http connectivity to first IP address in the CIDR
I1215 07:12:46.359073  168455 checks.go:102] validating the container runtime
I1215 07:12:46.411452  168455 checks.go:637] validating whether swap is enabled or not
I1215 07:12:46.411735  168455 checks.go:368] validating the presence of executable crictl
I1215 07:12:46.411824  168455 checks.go:368] validating the presence of executable conntrack
I1215 07:12:46.411889  168455 checks.go:368] validating the presence of executable ip
I1215 07:12:46.411958  168455 checks.go:368] validating the presence of executable iptables
I1215 07:12:46.412562  168455 checks.go:368] validating the presence of executable mount
I1215 07:12:46.412611  168455 checks.go:368] validating the presence of executable nsenter
I1215 07:12:46.412678  168455 checks.go:368] validating the presence of executable ethtool
I1215 07:12:46.412707  168455 checks.go:368] validating the presence of executable tc
I1215 07:12:46.412734  168455 checks.go:368] validating the presence of executable touch
I1215 07:12:46.412764  168455 checks.go:514] running all checks
I1215 07:12:46.434582  168455 checks.go:399] checking whether the given node name is valid and reachable using net.LookupHost
I1215 07:12:46.434726  168455 checks.go:603] validating kubelet version
I1215 07:12:46.542931  168455 checks.go:128] validating if the "kubelet" service is enabled and active
I1215 07:12:46.564304  168455 checks.go:201] validating availability of port 10250
I1215 07:12:46.564624  168455 checks.go:327] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1215 07:12:46.564811  168455 checks.go:201] validating availability of port 2379
I1215 07:12:46.564862  168455 checks.go:201] validating availability of port 2380
I1215 07:12:46.564946  168455 checks.go:241] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1215 07:12:46.566993  168455 checks.go:830] using image pull policy: IfNotPresent
I1215 07:12:47.692074  168455 checks.go:862] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.8
I1215 07:12:47.766197  168455 checks.go:862] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.8
I1215 07:12:47.814937  168455 checks.go:862] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.8
I1215 07:12:47.870754  168455 checks.go:862] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.30.8
I1215 07:12:47.922246  168455 checks.go:862] image exists: registry.aliyuncs.com/google_containers/coredns:v1.11.3
I1215 07:12:48.586641  168455 checks.go:862] image exists: registry.aliyuncs.com/google_containers/pause:3.9
I1215 07:12:48.631675  168455 checks.go:862] image exists: registry.aliyuncs.com/google_containers/etcd:3.5.15-0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1215 07:12:48.632405  168455 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1215 07:12:48.833143  168455 certs.go:483] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.150]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1215 07:12:49.404512  168455 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1215 07:12:49.656210  168455 certs.go:483] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1215 07:12:50.444228  168455 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1215 07:12:50.721789  168455 certs.go:483] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.150 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.150 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1215 07:12:51.853625  168455 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1215 07:12:52.465876  168455 kubeconfig.go:112] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1215 07:12:52.979622  168455 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I1215 07:12:53.422645  168455 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1215 07:12:53.619393  168455 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1215 07:12:54.483221  168455 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1215 07:12:54.868720  168455 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1215 07:12:54.868908  168455 manifests.go:103] [control-plane] getting StaticPodSpecs
I1215 07:12:54.869810  168455 certs.go:483] validating certificate period for CA certificate
I1215 07:12:54.870005  168455 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1215 07:12:54.870076  168455 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I1215 07:12:54.870094  168455 manifests.go:129] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I1215 07:12:54.870107  168455 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1215 07:12:54.870119  168455 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I1215 07:12:54.870130  168455 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I1215 07:12:54.876713  168455 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1215 07:12:54.876889  168455 manifests.go:103] [control-plane] getting StaticPodSpecs
I1215 07:12:54.877720  168455 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1215 07:12:54.877988  168455 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I1215 07:12:54.878008  168455 manifests.go:129] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I1215 07:12:54.878018  168455 manifests.go:129] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1215 07:12:54.878028  168455 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1215 07:12:54.878036  168455 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1215 07:12:54.878044  168455 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I1215 07:12:54.878054  168455 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I1215 07:12:54.880685  168455 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1215 07:12:54.880874  168455 manifests.go:103] [control-plane] getting StaticPodSpecs
I1215 07:12:54.881883  168455 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1215 07:12:54.884406  168455 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I1215 07:12:54.884552  168455 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.503212515s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 6.004358418s
I1215 07:13:02.950436  168455 kubeconfig.go:608] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I1215 07:13:02.956102  168455 kubeconfig.go:681] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
I1215 07:13:02.977389  168455 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1215 07:13:03.001741  168455 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1215 07:13:03.029915  168455 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
I1215 07:13:03.030378  168455 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "k8s-master" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: xkqp9q.ti1oioinh145vm3d
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1215 07:13:03.123071  168455 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I1215 07:13:03.123887  168455 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I1215 07:13:03.124734  168455 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I1215 07:13:03.132670  168455 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1215 07:13:03.351187  168455 request.go:629] Waited for 190.466888ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.0.150:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s
I1215 07:13:03.360395  168455 kubeletfinalize.go:91] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1215 07:13:03.362716  168455 kubeletfinalize.go:145] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I1215 07:13:04.351345  168455 request.go:629] Waited for 194.522734ms due to client-side throttling, not priority and fairness, request: POST:https://192.168.0.150:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.150:6443 --token xkqp9q.ti1oioinh145vm3d \
        --discovery-token-ca-cert-hash sha256:c3bd7dd4f6f49d9204ca460e1909b3e3f099a4ccf6ae0775067881aeb94e265b 

3. Before using kubectl, configure the kubeconfig as instructed by the output of the deployment:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

4. The deployment succeeded; check the nodes:

root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES           AGE   VERSION
k8s-master   NotReady   control-plane   77s   v1.30.8

5. Join node1 to the cluster:

root@k8s-node1:~# kubeadm join 192.168.0.150:6443 --token 60x9oy.59k47hec5xxc802w \
> --discovery-token-ca-cert-hash sha256:185cadb3e6a8e1a99b994d5b94dbcf84721cac9b85069ed3b1b3fb832078b761 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002555673s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

6. Join node2 to the cluster:

root@k8s-node2:~# kubeadm join 192.168.0.150:6443 --token 60x9oy.59k47hec5xxc802w \
> --discovery-token-ca-cert-hash sha256:185cadb3e6a8e1a99b994d5b94dbcf84721cac9b85069ed3b1b3fb832078b761 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 2.002711976s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

7. Check the nodes again:

root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   2m21s   v1.30.8
k8s-node1    NotReady   <none>          33s     v1.30.8
k8s-node2    NotReady   <none>          5s      v1.30.8

4.6 Installing Calico

1. Kubernetes 1.30 needs a newer Calico release.

2. Every node needs to pull the Calico images, version v3.29.1; mirror registry: https://docker.aityp.com

ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/cni:v3.29.1
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/node:v3.29.1
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/kube-controllers:v3.29.1

ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/cni:v3.29.1 docker.io/calico/cni:v3.29.1
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/node:v3.29.1 docker.io/calico/node:v3.29.1
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/kube-controllers:v3.29.1 docker.io/calico/kube-controllers:v3.29.1

ctr -n k8s.io images delete swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/cni:v3.29.1
ctr -n k8s.io images delete swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/node:v3.29.1
ctr -n k8s.io images delete swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/calico/kube-controllers:v3.29.1

3. Download the Calico manifest (see the sketch after this list).

  • Uncomment the two lines that define CALICO_IPV4POOL_CIDR in the manifest and set the value to the --pod-network-cidr passed to kubeadm init (172.16.0.0/16 here).

  • Run kubectl apply -f calico.yaml, then check that the pods come up.
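
A hedged sketch of fetching the manifest and setting the pool CIDR; the URL follows the Calico release layout (projectcalico/calico, manifests/calico.yaml), and the CIDR matches the kubeadm init command above:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/calico.yaml

# In calico.yaml, uncomment and set:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "172.16.0.0/16"

kubectl apply -f calico.yaml
kubectl get pods -n kube-system -w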

That completes the Kubernetes v1.30 deployment!

From: https://blog.csdn.net/qq_25542947/article/details/144491691
