
Kubernetes v1.28.2 & Calico eBPF

Posted: 2024-08-25 15:37:15

Brief cluster initialization steps


  1. Initialize the cluster (kube-proxy is skipped via --skip-phases=addon/kube-proxy, since the Calico eBPF dataplane will replace it)
    kubeadm init \
        --skip-phases=addon/kube-proxy \
        --apiserver-cert-extra-sans=35.229.220.159,127.0.0.1,10.0.0.3,10.0.0.4,10.0.0.5,10.254.0.2 \
        --control-plane-endpoint=apiserver.unlimit.club \
        --apiserver-advertise-address=10.0.0.3 \
        --pod-network-cidr=172.21.0.0/20 \
        --service-cidr=10.10.10.0/24 \
        --kubernetes-version=v1.28.2 \
        --upload-certs \
        --v=5
    I1018 03:41:25.129767  107890 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
    I1018 03:41:25.129880  107890 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
    I1018 03:41:25.143093  107890 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
    [init] Using Kubernetes version: v1.28.2
    [preflight] Running pre-flight checks
    I1018 03:41:25.414700  107890 checks.go:563] validating Kubernetes and kubeadm version
    I1018 03:41:25.414779  107890 checks.go:168] validating if the firewall is enabled and active
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    I1018 03:41:25.444812  107890 checks.go:203] validating availability of port 6443
    I1018 03:41:25.445435  107890 checks.go:203] validating availability of port 10259
    I1018 03:41:25.445538  107890 checks.go:203] validating availability of port 10257
    I1018 03:41:25.445623  107890 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
    I1018 03:41:25.445660  107890 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
    I1018 03:41:25.445692  107890 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
    I1018 03:41:25.445712  107890 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
    I1018 03:41:25.445782  107890 checks.go:430] validating if the connectivity type is via proxy or direct
    I1018 03:41:25.445852  107890 checks.go:469] validating http connectivity to first IP address in the CIDR
    I1018 03:41:25.445888  107890 checks.go:469] validating http connectivity to first IP address in the CIDR
    I1018 03:41:25.445920  107890 checks.go:104] validating the container runtime
    I1018 03:41:25.570150  107890 checks.go:639] validating whether swap is enabled or not
    I1018 03:41:25.570510  107890 checks.go:370] validating the presence of executable crictl
    I1018 03:41:25.570568  107890 checks.go:370] validating the presence of executable conntrack
    I1018 03:41:25.570840  107890 checks.go:370] validating the presence of executable ip
    I1018 03:41:25.570952  107890 checks.go:370] validating the presence of executable iptables
    I1018 03:41:25.571347  107890 checks.go:370] validating the presence of executable mount
    I1018 03:41:25.571499  107890 checks.go:370] validating the presence of executable nsenter
    I1018 03:41:25.571586  107890 checks.go:370] validating the presence of executable ebtables
    I1018 03:41:25.571641  107890 checks.go:370] validating the presence of executable ethtool
    I1018 03:41:25.571678  107890 checks.go:370] validating the presence of executable socat
    I1018 03:41:25.571746  107890 checks.go:370] validating the presence of executable tc
        [WARNING FileExisting-tc]: tc not found in system path
    I1018 03:41:25.572021  107890 checks.go:370] validating the presence of executable touch
    I1018 03:41:25.572148  107890 checks.go:516] running all checks
    I1018 03:41:25.589492  107890 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
    I1018 03:41:25.589605  107890 checks.go:605] validating kubelet version
    I1018 03:41:25.659374  107890 checks.go:130] validating if the "kubelet" service is enabled and active
    I1018 03:41:25.696159  107890 checks.go:203] validating availability of port 10250
    I1018 03:41:25.696458  107890 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
    I1018 03:41:25.696548  107890 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
    I1018 03:41:25.696593  107890 checks.go:203] validating availability of port 2379
    I1018 03:41:25.696715  107890 checks.go:203] validating availability of port 2380
    I1018 03:41:25.696787  107890 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    I1018 03:41:25.697200  107890 checks.go:828] using image pull policy: IfNotPresent
    I1018 03:41:25.734954  107890 checks.go:854] pulling: registry.k8s.io/kube-apiserver:v1.28.2
    I1018 03:41:28.384613  107890 checks.go:854] pulling: registry.k8s.io/kube-controller-manager:v1.28.2
    I1018 03:41:30.335595  107890 checks.go:854] pulling: registry.k8s.io/kube-scheduler:v1.28.2
    I1018 03:41:31.649011  107890 checks.go:854] pulling: registry.k8s.io/kube-proxy:v1.28.2
    I1018 03:41:33.709315  107890 checks.go:854] pulling: registry.k8s.io/pause:3.9
    I1018 03:41:34.241124  107890 checks.go:854] pulling: registry.k8s.io/etcd:3.5.9-0
    I1018 03:41:38.606711  107890 checks.go:854] pulling: registry.k8s.io/coredns/coredns:v1.10.1
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    I1018 03:41:40.551893  107890 certs.go:112] creating a new certificate authority for ca
    [certs] Generating "ca" certificate and key
    I1018 03:41:40.649095  107890 certs.go:519] validating certificate period for ca certificate
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [apiserver.unlimit.club kubernetes kubernetes-cp-1 kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.10.1 10.0.0.3 35.229.220.159 127.0.0.1 10.0.0.4 10.0.0.5 10.254.0.2]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    I1018 03:41:41.154211  107890 certs.go:112] creating a new certificate authority for front-proxy-ca
    [certs] Generating "front-proxy-ca" certificate and key
    I1018 03:41:41.416584  107890 certs.go:519] validating certificate period for front-proxy-ca certificate
    [certs] Generating "front-proxy-client" certificate and key
    I1018 03:41:41.651344  107890 certs.go:112] creating a new certificate authority for etcd-ca
    [certs] Generating "etcd/ca" certificate and key
    I1018 03:41:41.789558  107890 certs.go:519] validating certificate period for etcd/ca certificate
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [kubernetes-cp-1 localhost] and IPs [10.0.0.3 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [kubernetes-cp-1 localhost] and IPs [10.0.0.3 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    I1018 03:41:42.434726  107890 certs.go:78] creating new public/private key files for signing service account users
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    I1018 03:41:42.585528  107890 kubeconfig.go:103] creating kubeconfig file for admin.conf
    [kubeconfig] Writing "admin.conf" kubeconfig file
    I1018 03:41:43.067844  107890 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    I1018 03:41:43.270316  107890 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    I1018 03:41:43.475217  107890 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    I1018 03:41:43.800707  107890 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    I1018 03:41:43.800753  107890 manifests.go:102] [control-plane] getting StaticPodSpecs
    I1018 03:41:43.801607  107890 certs.go:519] validating certificate period for CA certificate
    I1018 03:41:43.801673  107890 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
    I1018 03:41:43.801681  107890 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
    I1018 03:41:43.801687  107890 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
    I1018 03:41:43.805283  107890 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    I1018 03:41:43.805319  107890 manifests.go:102] [control-plane] getting StaticPodSpecs
    I1018 03:41:43.805609  107890 manifests.go:128] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
    I1018 03:41:43.805631  107890 manifests.go:128] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
    I1018 03:41:43.805641  107890 manifests.go:128] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
    I1018 03:41:43.805653  107890 manifests.go:128] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
    I1018 03:41:43.805667  107890 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
    I1018 03:41:43.806532  107890 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    I1018 03:41:43.806560  107890 manifests.go:102] [control-plane] getting StaticPodSpecs
    I1018 03:41:43.806817  107890 manifests.go:128] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
    I1018 03:41:43.807395  107890 manifests.go:157] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    I1018 03:41:43.807498  107890 kubelet.go:67] Stopping the kubelet
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    I1018 03:41:44.286472  107890 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 10.003887 seconds
    I1018 03:41:54.298686  107890 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    I1018 03:41:54.317666  107890 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap
    [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
    I1018 03:41:54.333181  107890 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node
    I1018 03:41:54.333276  107890 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "kubernetes-cp-1" as an annotation
    [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:
    4c167ff95a4e098c5b73662e54f19b31fd3197735720331e2fb9538a10fd0941
    [mark-control-plane] Marking the node kubernetes-cp-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node kubernetes-cp-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
    [bootstrap-token] Using token: zcc3ju.p1yaig03rygqn5s0
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    I1018 03:41:55.428803  107890 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
    I1018 03:41:55.429548  107890 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
    I1018 03:41:55.429888  107890 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
    I1018 03:41:55.436524  107890 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
    I1018 03:41:55.446829  107890 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    I1018 03:41:55.449741  107890 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
    [addons] Applied essential addon: CoreDNS
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of the control-plane node running the following command on each as root:
    
      kubeadm join apiserver.unlimit.club:6443 --token zcc3ju.p1yaig03rygqn5s0 \
        --discovery-token-ca-cert-hash sha256:b6aa07c43ebcd4e3785985413acc22a2f20ff2fc37097dd1e54b3996eda38cbe \
        --control-plane --certificate-key 4c167ff95a4e098c5b73662e54f19b31fd3197735720331e2fb9538a10fd0941
    
    Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
    As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
    "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join apiserver.unlimit.club:6443 --token zcc3ju.p1yaig03rygqn5s0 \
        --discovery-token-ca-cert-hash sha256:b6aa07c43ebcd4e3785985413acc22a2f20ff2fc37097dd1e54b3996eda38cbe
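
Because kube-proxy was skipped at init time, Calico's eBPF dataplane has to take over service handling, and Calico must be told how to reach the API server directly rather than via the `kubernetes` Service (which would otherwise depend on kube-proxy). A minimal sketch of the usual operator-based configuration follows; the ConfigMap name and `tigera-operator` namespace follow the upstream Calico operator-install convention, and the host below reuses the control-plane endpoint from the output above — adjust both to your environment:

```yaml
# Tells Calico components how to reach the API server without kube-proxy.
# Name/namespace assume the Tigera operator install; adapt if Calico was
# installed another way (e.g. manifest-based installs use kube-system).
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "apiserver.unlimit.club"
  KUBERNETES_SERVICE_PORT: "6443"
```

Once the ConfigMap exists and Calico is running, the eBPF dataplane can be switched on for an operator-managed install with something like `kubectl patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"BPF"}}}'` — see the Calico eBPF documentation for the exact steps for your install method.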

From: https://www.cnblogs.com/apink/p/18379008
