k8s Cluster - CNI Network Plugins (Calico and Flannel)


1) Deploy the Flannel network (on the master node)

On the master node, the worker nodes show a status of NotReady:

[root@k8s-master01-15 ~]# kubectl get node
NAME               STATUS     ROLES    AGE   VERSION
k8s-master01-15    NotReady   master   20m   v1.20.11
k8s-node01-16      NotReady   <none>   19m   v1.20.11
k8s-node02-17      NotReady   <none>   19m   v1.20.11

How to check

Run kubectl get nodes; the nodes have not come up to Ready.
Check the kubelet logs with journalctl -u kubelet:
Unable to update cni config: no networks found in /etc/cni/net.d

This error means no CNI network plugin is installed; continue below to deploy the Flannel network. Once the plugin is installed, the node status will change to Ready after a short wait. A quick way to confirm the missing CNI configuration is shown right after this.
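A minimal check sketch, assuming the default kubelet CNI paths:

# An empty CNI config directory is exactly what the kubelet error reports
ls -l /etc/cni/net.d/
# Follow the kubelet logs and filter for CNI-related messages
journalctl -u kubelet -f | grep -i cni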

# Run on the master machine
# 1. Create a directory to organize the installation files
mkdir -p /data/script/kubernetes/install-k8s/core/ && cd /data/script/kubernetes/
# 2. Move the main files into the directory
mv /data/script/kubeadm-init.log /data/script/kubeadm-config.yaml /data/script/kubernetes/install-k8s/core/
# 3. Create the flannel directory
mkdir -p /data/script/kubernetes/install-k8s/plugin/flannel/ && cd /data/script/kubernetes/install-k8s/plugin/flannel/

Download the kube-flannel.yml file

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Output of the download command
--2021-07-01 18:10:44--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14366 (14K) [text/plain]
Saving to: ‘kube-flannel.yml’
kube-flannel.yml              100%[================================================>]  14.03K  --.-KB/s    in 0.05s   
2021-07-01 18:15:00 (286 KB/s) - ‘kube-flannel.yml’ saved [14366/14366]

Install the Flannel network plugin

# Pull the image first; from mainland China this can be quite slow
docker pull quay.io/coreos/flannel:v0.14.0
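
If the pull from quay.io stalls, a common workaround is to pull through a mirror and retag; the registry path below is a hypothetical placeholder, substitute one that is reachable from your machines:

# Hypothetical mirror path -- replace with a real, reachable mirror registry
docker pull registry.example.com/mirror/flannel:v0.14.0
docker tag registry.example.com/mirror/flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0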

Edit the NIC settings in kube-flannel.yml (on the master machine)

vim kube-flannel.yml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0        # If the machine has multiple NICs, specify the internal one here; if unspecified, Flannel picks the first NIC it finds
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
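
Before creating the resources, confirm that the "Network" value in net-conf.json above (10.244.0.0/16) matches the pod CIDR the cluster was initialized with; a quick check sketch, assuming a kubeadm-built cluster:

# The podSubnet recorded at kubeadm init time must equal Flannel's "Network"
kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i podSubnet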
# Create the Flannel resources
kubectl create -f kube-flannel.yml

# Output of the create command
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created


# Check the pods; the Flannel DaemonSet pods have been created (system components live in the kube-system namespace by default). In this capture some pods are still restarting; see the troubleshooting sketch after the listing.
[root@k8s-master01-15 flannel]# kubectl get pod -n kube-system
NAME                                       READY   STATUS               RESTARTS         AGE
coredns-66bff467f8-tlqdw                   1/1     Running              0                18m
coredns-66bff467f8-zpg4q                   1/1     Running              0                18m
etcd-k8s-master01-15                       1/1     Running              0                18m
kube-apiserver-k8s-master01-15             1/1     Running              0                18m
kube-controller-manager-k8s-master01-15    1/1     Running              0                18m
kube-flannel-ds-6lbmw                      1/1     Running              15 (5m30s ago)   59m
kube-flannel-ds-97mkh                      0/1     CrashLoopBackOff     14 (4m58s ago)   59m
kube-flannel-ds-fthvm                      0/1     Running              15 (5m26s ago)   59m
kube-proxy-4jj7b                           0/1     CrashLoopBackOff     0                4m9s
kube-proxy-ksltf                           0/1     CrashLoopBackOff     0                4m9s
kube-proxy-w8dcr                           0/1     CrashLoopBackOff     0                4m9s
kube-scheduler-k8s-master01-15             1/1     Running              0                18m
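
If pods sit in CrashLoopBackOff as some do above, inspect them first; a minimal sketch using a pod name from the listing:

# Logs from the previous (crashed) container instance
kubectl -n kube-system logs kube-flannel-ds-97mkh --previous
# Events and last state usually reveal the cause (wrong --iface, pod CIDR mismatch, etc.)
kubectl -n kube-system describe pod kube-flannel-ds-97mkh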


# Check the nodes again; the status has changed to Ready
[root@k8s-master01-15 flannel]# kubectl get node
NAME               STATUS   ROLES    AGE   VERSION
k8s-master01-15    Ready    master   19m   v1.20.11

How to uninstall Flannel (kubectl delete):

[root@k8s-master01-15 flannel]# kubectl delete -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy "psp.flannel.unprivileged" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds" deleted

2) Deploy the Calico network plugin

Which problems come up with Calico, and how are they handled? See:

https://www.jianshu.com/p/8b4c3ac2db6f

Download the resource file from the Calico website:

cat calico-etcd.yaml
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # The keys below should be uncommented and the values populated with the base64
  # encoded contents of each file that would be associated with the TLS data.
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"
  # Configure the MTU to use for workload interfaces and tunnels.
  # - If Wireguard is enabled, set to your network MTU - 60
  # - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
  # - Otherwise, if IPIP is enabled, set to your network MTU - 20
  # - Otherwise, if not using any encapsulation, set to your network MTU.
  veth_mtu: "1440"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

---
# Source: calico/templates/calico-kube-controllers-rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Pods are monitored for changing labels.
  # The node controller monitors Kubernetes nodes.
  # Namespace and serviceaccount labels are used for policy.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
      - get
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---

---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: registry.cn-beijing.aliyuncs.com/dotbalo/cni:v3.15.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: registry.cn-beijing.aliyuncs.com/dotbalo/pod2daemon-flexvol:v3.15.3
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: registry.cn-beijing.aliyuncs.com/dotbalo/node:v3.15.3
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the VXLAN tunnel device.
            - name: FELIX_VXLANMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the Wireguard tunnel device.
            - name: FELIX_WIREGUARDMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-kube-controllers
          image: registry.cn-beijing.aliyuncs.com/dotbalo/kube-controllers:v3.15.3
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml

---
# Source: calico/templates/kdd-crds.yaml

Only the modified parts are listed below:

$ vim calico-etcd.yaml
...
# The placeholder values below must be filled in with the output of the commands noted next to them
# kubeadm cert paths: /opt/etcd/ssl/{server-key.pem,server.pem,ca.pem}
# binary-install cert paths: /opt/k8s_tls/etcd/{server-key.pem,server.pem,ca.pem}
  # etcd private key
  etcd-key:  # output of: cat /root/TLS/etcd/server-key.pem | base64 -w 0
  # etcd certificate
  etcd-cert: # output of: cat /root/TLS/etcd/server.pem | base64 -w 0
  # etcd CA certificate
  etcd-ca:   # output of: cat /root/TLS/etcd/ca.pem | base64 -w 0
...
  # Requires an externally deployed etcd cluster: https://www.jianshu.com/p/fbec19c20454
  # etcd cluster endpoints
  etcd_endpoints: "https://172.23.199.15:2379,https://172.23.199.16:2379,https://172.23.199.17:2379"
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
...
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Added section
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth0"
            # value: "interface=eth.*"
            # value: "interface=can-reach=www.baidu.com"
            # End of added section
            - name: IP
              value: "autodetect"
            # Disable IPIP mode
            - name: CALICO_IPV4POOL_IPIP
              value: "Never"
            # Set the Pod IP range; this must match podSubnet in kubeadm-config.yaml (kubeadm install) or the pod_net variable in hosts.yaml (binary install)
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
...
        # Directory on the host where the CNI plugin binaries are placed; /opt/apps would match the install_dir variable in hosts.yaml for a binary install
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin   # /opt/apps/cni/bin for a binary install
        # CNI configuration directory; likewise keep it consistent with install_dir in hosts.yaml
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d   # /opt/apps/cni/conf for a binary install
        # CNI log directory; likewise keep it consistent with install_dir in hosts.yaml
        - name: cni-log-dir
          hostPath:
            path: /var/log/calico/cni   # /opt/apps/cni/log for a binary install
        # Change this volume's mount mode to 0440 (there are two occurrences)
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0440

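A minimal sketch for filling in the Secret, assuming the binary-install certificate paths from the comments above; the sed patterns target the commented-out placeholder lines in the stock manifest:

# Base64-encode the etcd TLS material on a single line
ETCD_KEY=$(base64 -w 0 < /opt/k8s_tls/etcd/server-key.pem)
ETCD_CERT=$(base64 -w 0 < /opt/k8s_tls/etcd/server.pem)
ETCD_CA=$(base64 -w 0 < /opt/k8s_tls/etcd/ca.pem)
# Patch the placeholders in the Secret section of calico-etcd.yaml
sed -i "s|# etcd-key: null|etcd-key: ${ETCD_KEY}|" calico-etcd.yaml
sed -i "s|# etcd-cert: null|etcd-cert: ${ETCD_CERT}|" calico-etcd.yaml
sed -i "s|# etcd-ca: null|etcd-ca: ${ETCD_CA}|" calico-etcd.yaml
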
Apply the modified resource file:

kubectl apply -f calico-etcd.yaml 
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

It starts the following Pods in the kube-system namespace:

[root@k8s-master kubeadm]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-9d65bcc55-k2d8j   1/1     Running   0          18h
calico-node-7jbk2                         1/1     Running   0          18h
calico-node-ffbwh                         1/1     Running   0          18h
calico-node-rl4dw                         1/1     Running   0          18h
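
Beyond the pod list, one can verify that calico-node is ready on every host and that the nodes have flipped to Ready; a quick sketch:

# Every calico-node pod should be 1/1, spread one per node
kubectl -n kube-system get pod -l k8s-app=calico-node -o wide
# Nodes become Ready once the CNI config lands in /etc/cni/net.d
kubectl get nodes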

Deploy and use the calicoctl tool

1) Environment
The Kubernetes cluster is v1.20.11 and Calico is v3.20.0.

2) Download the calicoctl binary

wget https://github.com/projectcalico/calicoctl/releases/download/v3.20.0/calicoctl
cp calicoctl /usr/bin
chmod +x /usr/bin/calicoctl
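
Verify the binary before pointing it at the cluster:

# Prints client version information (and cluster version once configured)
calicoctl version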

3) Test from the command line (via environment variables)

[root@k8s-master-15 ~]# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+------------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+--------------+-------------------+-------+------------+-------------+
| 172.23.199.16| node-to-node mesh | up    | 20xx-xx-xx | Established |
| 172.23.199.17| node-to-node mesh | up    | 20xx-xx-xx | Established |
+--------------+-------------------+-------+------------+-------------+

4) Test with a configuration file

# 1. Edit the configuration file
[root@k8s-master-15 ~]# mkdir -p /etc/calico/
[root@k8s-master-15 ~]# vim /etc/calico/calicoctl.cfg
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "~/.kube/config"

# 2. Run the test commands
[root@k8s-master-15 ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+------------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+--------------+-------------------+-------+------------+-------------+
| 172.23.199.16| node-to-node mesh | up    | 20xx-xx-xx | Established |
| 172.23.199.17| node-to-node mesh | up    | 20xx-xx-xx | Established |
+--------------+-------------------+-------+------------+-------------+

Note: node-to-node mesh means all nodes peer with one another over full-mesh BGP connections.

[root@k8s-master ~]# netstat -anp | grep ESTABLISH | grep bird
tcp    0    0 172.23.199.16:179      172.23.199.16:46090       ESTABLISHED 8918/bird
tcp    0    0 172.23.199.17:179      172.23.199.17:49770       ESTABLISHED 8918/bird

Since this cluster is not very large, Calico is deployed in BGP mode with node-to-node mesh (every node peers with every other node). That mode is fine for small clusters; as of Calico 3.4.0 it scales to around 100 nodes. Beyond that, the usual move is to switch from the full mesh to BGP route reflectors, as sketched below.
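
A hedged sketch of that switch (the AS number, cluster ID, and node name are illustrative, and the full mesh should only be disabled after the route reflectors are peered):

# Designate one node as a route reflector and label it so peers can select it
calicoctl patch node k8s-node01-16 -p '{"spec":{"bgp":{"routeReflectorClusterID":"244.0.0.1"}}}'
kubectl label node k8s-node01-16 route-reflector=true

# Peer every node with the route reflectors
cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-route-reflectors
spec:
  nodeSelector: all()
  peerSelector: route-reflector == 'true'
EOF

# Finally, turn off the node-to-node full mesh
cat <<EOF | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false
  asNumber: 64512
EOF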
