
k8s flannel

Posted: 2023-04-23 11:45:23


Symptoms: the coredns pods are stuck in ContainerCreating with

failed: open /run/flannel/subnet.env: no such file or directory

and the flannel pod kube-flannel-ds-kjtd8 is in CrashLoopBackOff.

k8s23 — company self-hosted environment. Prerequisites on every node:

Disable swap:
swapoff -a

Disable firewalld:
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux (setenforce 0 for the running system, and SELINUX=disabled in /etc/selinux/config to persist across reboots).


k8s23-master    192.168.19.30
k8s23-node01    192.168.19.32
k8s23-node02    192.168.19.31

Pass bridged IPv4 traffic to the iptables chains, plus related kernel tuning:

net.ipv4.ip_forward = 1

net.ipv4.tcp_tw_recycle = 0

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.neigh.default.gc_thresh1=2048
net.ipv4.neigh.default.gc_thresh2=4096
net.ipv4.neigh.default.gc_thresh3=8192
fs.inotify.max_user_watches=524288
fs.inotify.max_user_instances=8192
vm.max_map_count=262144
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.netdev_max_backlog=16384
net.core.somaxconn=32768
net.ipv4.tcp_max_syn_backlog=8096
net.netfilter.nf_conntrack_tcp_be_liberal=1
net.netfilter.nf_conntrack_udp_timeout_stream=90
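The settings above can be persisted so they survive a reboot. A minimal sketch, assuming the file name /etc/sysctl.d/k8s.conf (any name under /etc/sysctl.d works; only a few keys are shown — put the full list from above into the file):

```shell
# Persist the kernel parameters listed above (run as root on every node).
# The net.bridge.* keys only exist once the br_netfilter module is loaded.
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # reload every file under /etc/sysctl.d/
```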


List which versions of a package are available in the configured yum repos:
yum --showduplicates list kubelet

If the downloaded package is not the version you expected, pin the version explicitly:
yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5

[root@k8s-master ~]# kubectl get pods -o wide
The connection to the server localhost:8080 was refused - did you specify the right host or port?
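This error means kubectl has no kubeconfig and falls back to the localhost:8080 default. On the master the fix is the standard post-init step that kubeadm init itself prints:

```shell
# Point kubectl at the real API server via the admin kubeconfig.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```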

Initialize the control plane. First a generic example (note the different version and CIDRs), then the command actually used for this cluster:

kubeadm init --apiserver-advertise-address=172.20.234.4 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

kubeadm init \
  --kubernetes-version=v1.23.5 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.20.0.0/16 \
  --service-cidr=172.26.0.0/16 \
  --apiserver-advertise-address=192.168.19.30 \
  --ignore-preflight-errors=Swap

Note --pod-network-cidr=10.20.0.0/16: the stock flannel manifest assumes 10.244.0.0/16, which is exactly the mismatch that surfaces later in the flannel logs.


The cgroup driver used by Docker (default: cgroupfs) must match the one used by the kubelet.

Check Docker's cgroup driver:
docker info | grep -i "Cgroup Driver"

Check the kubelet's cgroup driver:
systemctl show --property=Environment kubelet | cat

Align them via /etc/sysconfig/kubelet:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

Then reload and restart the kubelet, and check its health endpoint:
systemctl daemon-reload
systemctl restart kubelet
curl -sSL http://localhost:10248/healthz



kubeadm join 192.168.19.30:6443 --token rf0152.j8k0awgxa8zr37jg \
        --discovery-token-ca-cert-hash sha256:c5a57c0b67112c28a08cc64c2bd7c6c53cc2cf65e716fce23e3e6209430fa38e 
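The token above expires after 24 hours by default. If it has, a fresh join command can be printed on the master — a standard kubeadm subcommand, added here as a hint (not from the original notes):

```shell
# Generates a new bootstrap token and prints the full "kubeadm join ..." line.
kubeadm token create --print-join-command
```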


kubectl apply -f [podnetwork].yaml

https://192.168.19.30:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s

Retrying the join with verbose logging:
kubeadm join 192.168.19.30:6443 --token rf0152.j8k0awgxa8zr37jg --discovery-token-ca-cert-hash sha256:c5a57c0b67112c28a08cc64c2bd7c6c53cc2cf65e716fce23e3e6209430fa38e --v=5 --ignore-preflight-errors=all

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml

[root@k8s23-node01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Installing flannel on the nodes: the apply fails on node01 because worker nodes have no kubeconfig. There is no need to apply the manifest per node — flannel runs as a DaemonSet, so applying it once from the master deploys it to every node.

kubectl api-resources -o wide --namespaced=true

 

 kubectl -n kube-flannel logs kube-flannel-ds-kjtd8
I0423 00:49:32.318681       1 main.go:211] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true useMultiClusterCidr:false}
W0423 00:49:32.319455       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0423 00:49:32.334341       1 kube.go:144] Waiting 10m0s for node controller to sync
I0423 00:49:32.334459       1 kube.go:485] Starting kube subnet manager
I0423 00:49:32.338381       1 kube.go:506] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.20.0.0/24]
I0423 00:49:32.339012       1 kube.go:506] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.20.1.0/24]
I0423 00:49:32.339033       1 kube.go:506] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.20.2.0/24]
I0423 00:49:33.335022       1 kube.go:151] Node controller sync successful
I0423 00:49:33.335063       1 main.go:231] Created subnet manager: Kubernetes Subnet Manager - k8s-master
I0423 00:49:33.335068       1 main.go:234] Installing signal handlers
I0423 00:49:33.335330       1 main.go:542] Found network config - Backend type: vxlan
I0423 00:49:33.335350       1 match.go:206] Determining IP address of default interface
I0423 00:49:33.335838       1 match.go:259] Using interface with name enp0s3 and address 192.168.19.30
I0423 00:49:33.335911       1 match.go:281] Defaulting external address to interface address (192.168.19.30)
I0423 00:49:33.336022       1 vxlan.go:140] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E0423 00:49:33.336377       1 main.go:334] Error registering network: failed to acquire lease: subnet "10.244.0.0/16" specified in the flannel net config doesn't contain "10.20.0.0/24" PodCIDR of the "k8s-master" node
I0423 00:49:33.336509       1 main.go:522] Stopping shutdownHandler...
W0423 00:49:33.336540       1 reflector.go:347] github.com/flannel-io/flannel/pkg/subnet/kube/kube.go:486: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
 kubectl logs kube-flannel-ds-kjtd8 -n kube-flannel | grep CIDR
I0423 01:15:07.288694       1 kube.go:506] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.20.0.0/24]
I0423 01:15:07.288796       1 kube.go:506] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.20.1.0/24]
I0423 01:15:07.288819       1 kube.go:506] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.20.2.0/24]
E0423 01:15:08.289698       1 main.go:334] Error registering network: failed to acquire lease: subnet "10.244.0.0/16" specified in the flannel net config doesn't contain "10.20.0.0/24" PodCIDR of the "k8s-master" node
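The root cause of the error above: kubeadm init was run with --pod-network-cidr=10.20.0.0/16, but the stock kube-flannel.yml ConfigMap declares "Network": "10.244.0.0/16" in net-conf.json, so the nodes' 10.20.x.0/24 PodCIDRs fall outside flannel's network. The fix is to edit the manifest so the two match and re-apply it. A self-contained sketch of the edit, operating on a local copy of just the ConfigMap fragment (in a real cluster, run the same sed on the downloaded kube-flannel.yml and then kubectl apply -f kube-flannel.yml):

```shell
# Sample of flannel's net-conf.json with its default Network.
cat > /tmp/net-conf.json <<'EOF'
{
  "Network": "10.244.0.0/16",
  "Backend": { "Type": "vxlan" }
}
EOF
# Rewrite the flannel network to match kubeadm's --pod-network-cidr.
sed -i 's#10.244.0.0/16#10.20.0.0/16#' /tmp/net-conf.json
grep '"Network"' /tmp/net-conf.json
```

After re-applying the corrected manifest, the flannel pods should leave CrashLoopBackOff and write /run/flannel/subnet.env, which in turn unblocks the coredns pods stuck in ContainerCreating.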

 

 


From: https://www.cnblogs.com/ruiy/p/17346045.html
