
Errors encountered while setting up a k8s cluster


1. etcd

Aug 10 14:12:32 k8master-1 etcd[23435]: {"level":"warn","ts":"2022-08-10T14:12:32.069+0800","caller":"rafthttp/http.go:500","msg":"request cluster ID mismatch","local-member-id":"44ec88b2ad8081e","local-member-cluster-id":"ced548654624706f","local-member-server-version":"3.5.0","local-member-server-minimum-cluster-version":"3.0.0","remote-peer-server-name":"1d412b7cdf0f5787","remote-peer-server-version":"3.5.0","remote-peer-server-minimum-cluster-version":"3.0.0","remote-peer-cluster-id":"8c96ad28e090da8f"}

kube-apiserver

E0810 14:15:31.208449   22888 controller.go:223] unable to sync kubernetes service: etcdserver: requested lease not found
E0810 14:15:41.208772   22888 controller.go:223] unable to sync kubernetes service: etcdserver: requested lease not found

Troubleshooting:

[root@k8master-1 work]#  /app/k8s/bin/etcdctl --cacert=/etc/kubernetes/cert/ca.pem  --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem --endpoints=https://192.168.159.156:2379,https://192.168.159.158:2379,https://192.168.159.159:2379 member list  -w table
+------------------+---------+------------+------------------------------+------------------------------+------------+
|        ID        | STATUS  |    NAME    |          PEER ADDRS          |         CLIENT ADDRS         | IS LEARNER |
+------------------+---------+------------+------------------------------+------------------------------+------------+
|  44ec88b2ad8081e | started | k8master-1 | https://192.168.159.156:2380 |                              |      false |
|  7d173c333430d55 | started | k8worker-2 | https://192.168.159.159:2380 | https://192.168.159.159:2379 |      false |
| 1d412b7cdf0f5787 | started | k8worker-1 | https://192.168.159.158:2380 | https://192.168.159.158:2379 |      false |
+------------------+---------+------------+------------------------------+------------------------------+------------+

[root@k8master-1 work]#  /app/k8s/bin/etcdctl --cacert=/etc/kubernetes/cert/ca.pem  --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem --endpoints=https://192.168.159.156:2379,https://192.168.159.158:2379,https://192.168.159.159:2379 endpoint status  -w table
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.159.156:2379 |  44ec88b2ad8081e |   3.5.0 |  741 kB |      true |      false |        14 |       6986 |               6986 |        |
| https://192.168.159.158:2379 | 1d412b7cdf0f5787 |   3.5.0 |  1.3 MB |      true |      false |        17 |      40171 |              40171 |        |
| https://192.168.159.159:2379 |  7d173c333430d55 |   3.5.0 |  1.3 MB |     false |      false |        17 |      40171 |              40171 |        |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

If two members report IS LEADER = true and the log shows "request cluster ID mismatch", the diverged member (here k8master-1, with a different raft term, a smaller DB, and empty CLIENT ADDRS) has bootstrapped a separate cluster. Delete its data directories:
/app/k8s/etcd/work/*
/app/k8s/etcd/wal/*
and restart the service.

Fix:

systemctl stop etcd.service      # run on the member whose cluster ID diverged (k8master-1 here)
systemctl status etcd.service    # confirm it has stopped
rm -f /app/k8s/etcd/work/*       # data directory
rm -f /app/k8s/etcd/wal/*        # write-ahead log
systemctl start etcd.service
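
An alternative to wiping directories in place is to evict the diverged member and re-add it so that it resyncs from the current leader. A minimal sketch, assuming the same TLS flags as the commands above and that k8master-1 (ID 44ec88b2ad8081e) is the diverged member:

ETCDCTL="/app/k8s/bin/etcdctl --cacert=/etc/kubernetes/cert/ca.pem --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem --endpoints=https://192.168.159.158:2379"
$ETCDCTL member remove 44ec88b2ad8081e   # evict the stale member via a healthy endpoint
$ETCDCTL member add k8master-1 --peer-urls=https://192.168.159.156:2380
# then on k8master-1: clear /app/k8s/etcd/work/* and /app/k8s/etcd/wal/*,
# set --initial-cluster-state=existing in its unit file, and start etcd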

Normal log output:

Aug 10 15:04:35 k8worker-2 etcd[56620]: {"level":"info","ts":"2022-08-10T15:04:35.319+0800","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2245}
Aug 10 15:04:35 k8worker-2 etcd[56620]: {"level":"info","ts":"2022-08-10T15:04:35.319+0800","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2245,"took":"63.833µs"}
Aug 10 15:09:35 k8worker-2 etcd[56620]: {"level":"info","ts":"2022-08-10T15:09:35.326+0800","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2247}
Aug 10 15:09:35 k8worker-2 etcd[56620]: {"level":"info","ts":"2022-08-10T15:09:35.327+0800","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2247,"took":"46.555µs"}

2. etcd

{"level":"fatal","ts":"2022-08-10T15:03:50.046+0800","caller":"etcdmain/etcd.go:203","msg":"discovery failed","error":"cannot fetch cluster info from peer urls: could not retrieve cluster information from the given URLs","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:203\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/main.go:40\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/main.go:32\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
Aug 10 15:03:50 k8master-1 systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Aug 10 15:03:50 k8master-1 systemd[1]: Failed to start Etcd Server.
Aug 10 15:03:50 k8master-1 systemd[1]: Unit etcd.service entered failed state.
Aug 10 15:03:50 k8master-1 systemd[1]: etcd.service failed.

The other etcd members had not been started yet, so this node could not fetch cluster information from its peer URLs; starting etcd on the other nodes resolves it.
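
During initial bootstrap each member waits for its peers, so etcd should be brought up on all members at roughly the same time. A minimal sketch, assuming passwordless root SSH to the other two members:

systemctl start etcd.service &
ssh root@192.168.159.158 'systemctl start etcd.service' &
ssh root@192.168.159.159 'systemctl start etcd.service' &
wait    # all three come up together and can elect a leader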

3. kube-controller-manager

Aug 10 15:23:01 k8master-1 kube-controller-manager[35641]: unable to load configmap based request-header-client-ca-file: Get "https://192.168.159.156:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": x509: certificate signed by unknown authority

Troubleshooting:

[root@k8master-1 work]# cat /etc/systemd/system/kube-controller-manager.service |grep pem
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/apiserver-key.pem \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
[root@k8master-1 work]# cfssl certinfo -cert /etc/kubernetes/cert/ca.pem
{
  "subject": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "CMCC",
    "locality": "hangzhou",
    "province": "zhejiang",
    "names": [
      "CN",
      "zhejiang",
      "hangzhou",
      "k8s",
      "CMCC",
      "kubernetes"
    ]
  },
  "issuer": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "CMCC",
    "locality": "hangzhou",
    "province": "zhejiang",
    "names": [
      "CN",
      "zhejiang",
      "hangzhou",
      "k8s",
      "CMCC",
      "kubernetes"
    ]
  },
  "serial_number": "347768600398445090286403346077020712369829431697",
  "not_before": "2022-08-10T02:18:00Z",
  "not_after": "2032-08-07T02:18:00Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "84:B0:3E:D3:AF:DD:C3:EE:35:34:C0:A9:6D:61:3B:85:3:DA:D7:B5",
  "subject_key_id": "84:B0:3E:D3:AF:DD:C3:EE:35:34:C0:A9:6D:61:3B:85:3:DA:D7:B5",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIDvjCCAqagAwIBAgIUPOp7vueEa4wXYoSOmNcQ/sZ3yZEwDQYJKoZIhvcNAQEL\nBQAwZTELMAkGA1UEBhMCQ04xETAPBgNVBAgTCHpoZWppYW5nMREwDwYDVQQHEwho\nYW5nemhvdTEMMAoGA1UEChMDazhzMQ0wCwYDVQQLEwRDTUNDMRMwEQYDVQQDEwpr\ndWJlcm5ldGVzMB4XDTIyMDgxMDAyMTgwMFoXDTMyMDgwNzAyMTgwMFowZTELMAkG\nA1UEBhMCQ04xETAPBgNVBAgTCHpoZWppYW5nMREwDwYDVQQHEwhoYW5nemhvdTEM\nMAoGA1UEChMDazhzMQ0wCwYDVQQLEwRDTUNDMRMwEQYDVQQDEwprdWJlcm5ldGVz\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvkkTgWtX73cVk7YQjxUs\nxv+JdYnRlyL4XrWaqPIMTcPHosJzo/bnn1Neg/2s6ThWndyJFW6bS76FPNi/tnsF\ni8DJPkZkl3QVOHOstf7x3NWEmpo+ZhNLo06zds8wBiekSgTdBWtiSrrrHFIDVtga\n0njE2qoQUguB8nRXsTe0M/nk+zxBHEAIhoFV+0VISpBKlyshdqxKrR2C1j4ad22E\nh3g+s/NJT4jKY9aew1fid47O6VaeSLkr4JXota/x64/g+1ZXqOrSpgrjPx/RGvnI\nBKA3BLNGj4wgOwz9FMzde5D2WXaqnSsriVOVH/aUYwM3IbUTd2Xzx6i37F2i25rk\nFQIDAQABo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBAjAd\nBgNVHQ4EFgQUhLA+06/dw+41NMCpbWE7hQPa17UwHwYDVR0jBBgwFoAUhLA+06/d\nw+41NMCpbWE7hQPa17UwDQYJKoZIhvcNAQELBQADggEBAE1n1LITDmjbeO3z4J3J\ng+3tJXQiY2MCPy93IeGUKYYOZd+FhaaHQz8Ym6Z5nLdu+dROFy0Pr9IQ8lpZ7N//\ncOZO0J1VTQJFNOkQ7LCgLRl2W5FYT0NWiYwj0Gm60DH5TdOqzSAxJyqXy/SoK9TQ\nriFc14SrtHdtxmnLxcTyoFtEuLBusaBbxqMFvLHIsqC2+lb1YnC0fuiKTtMVW4+b\n2ir7GzO7l60q1wxziLuoBxrOCnFM86i3ef+LOrIp4AMHVLtIv4lGtpcu7CyyNOjj\nusq2Zx9jGd6MZzmd4gUZiyZeu93/31EdZakd+S6QdylMSCx6mKpFO4yFOKSifg3I\npqU=\n-----END CERTIFICATE-----\n"
}

[root@k8master-1 work]# cfssl certinfo -cert /etc/kubernetes/cert/apiserver.pem
{
  "subject": {
    "common_name": "apiserver",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "CMCC",
    "locality": "hangzhou",
    "province": "zhejiang",
    "names": [
      "CN",
      "zhejiang",
      "hangzhou",
      "k8s",
      "CMCC",
      "apiserver"
    ]
  },
  "issuer": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "CMCC",
    "locality": "hangzhou",
    "province": "hangzhou",
    "names": [
      "CN",
      "hangzhou",
      "hangzhou",
      "k8s",
      "CMCC",
      "kubernetes"
    ]
  },
  "serial_number": "84960477279698964990973585978458344028024167838",
  "sans": [
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local.",
    "127.0.0.1",
    "192.168.159.156"
  ],
  "not_before": "2022-08-04T09:10:00Z",
  "not_after": "2032-08-01T09:10:00Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "41:33:28:8C:FC:B9:AC:DF:BF:89:B:25:CF:C7:8C:19:13:B4:BC:18",
  "subject_key_id": "2F:CC:E5:2C:FA:DD:FB:36:34:F:CB:40:F:B9:7A:6B:E8:32:82:68",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIEczCCA1ugAwIBAgIUDuHCcrawXUH2DStz/Tdl+WP1XZ4wDQYJKoZIhvcNAQEL\nBQAwZTELMAkGA1UEBhMCQ04xETAPBgNVBAgTCGhhbmd6aG91MREwDwYDVQQHEwho\nYW5nemhvdTEMMAoGA1UEChMDazhzMQ0wCwYDVQQLEwRDTUNDMRMwEQYDVQQDEwpr\ndWJlcm5ldGVzMB4XDTIyMDgwNDA5MTAwMFoXDTMyMDgwMTA5MTAwMFowZDELMAkG\nA1UEBhMCQ04xETAPBgNVBAgTCHpoZWppYW5nMREwDwYDVQQHEwhoYW5nemhvdTEM\nMAoGA1UEChMDazhzMQ0wCwYDVQQLEwRDTUNDMRIwEAYDVQQDEwlhcGlzZXJ2ZXIw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC+E66Mz32qB92Zvb5coWdE\nJwGznV4sZW0p+VF6aMMeXKHgnvztFh2mQNxyup6Wq5WxQgem5KXye7izcoUgC+/c\nBGIjBC8YC2q9O8DacLrq0eUhmmsORnYhpHJ0q2CiXn+VysAlUKhAViVxY5nK5BtG\nTnQ1gQNRw+MqSTONNMVHq7T9l09UVw3zramNZYEnMiN0WyonEQ5MC+3zYIlOe2PZ\n5nVc4QEW9IuzXgDydZTky7Uk6OhlObohcYduBP2yb6J0FdC+r2cEcmQ2BRrtHunl\nbxn+TY63r5lSn+cZsM8r0AjvRnyTHk0VQfHLD49uWZPJscT7RfneGd3rMuz1y67n\nAgMBAAGjggEaMIIBFjAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUH\nAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFC/M5Sz63fs2NA/L\nQA+5emvoMoJoMB8GA1UdIwQYMBaAFEEzKIz8uazfv4kLJc/HjBkTtLwYMIGWBgNV\nHREEgY4wgYuCCmt1YmVybmV0ZXOCEmt1YmVybmV0ZXMuZGVmYXVsdIIWa3ViZXJu\nZXRlcy5kZWZhdWx0LnN2Y4Iea3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVy\ngiVrdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwuhwR/AAABhwTA\nqJ+cMA0GCSqGSIb3DQEBCwUAA4IBAQC975QGZqMw32aJTbzdGrGJaiLg5jKFTgHl\nMdAkk5jqCDXFCBt6oIgnP662yswzc0Nn9AJEsF+Eqgg40W4REob4NwYBkOPfQK7T\n3oZahMPAWvG0/dnsr/J7qdZOxXrsMGrStN+qoRwyVEtrHw0tGvTOBZhZycKCN/UO\neXA2szY3Jie1oYpB5Y2zSIHtkWPJHzRqjr6rU2p+aLkrTxEkDBwo/ohku5aGoRmm\nuWsPULcvF/a6EBSkGK2tQ9b4mAmZuuHW6xM7H4PV7rxA+5vujKA+BbQEh1B+a/sW\nRscSDDR4rql+homx0ErJfNAQmIWZ7DBQUQQ378IlkXn2znaAsBvj\n-----END CERTIFICATE-----\n"
}
[root@k8master-1 work]# cfssl certinfo -cert /etc/kubernetes/cert/kube-controller-manager.pem
{
  "subject": {
    "common_name": "system:kube-controller-manager",
    "country": "CN",
    "organization": "system:kube-controller-manager",
    "organizational_unit": "CMCC",
    "locality": "hangzhou",
    "province": "zhejiang",
    "names": [
      "CN",
      "zhejiang",
      "hangzhou",
      "system:kube-controller-manager",
      "CMCC",
      "system:kube-controller-manager"
    ]
  },
  "issuer": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "CMCC",
    "locality": "hangzhou",
    "province": "hangzhou",
    "names": [
      "CN",
      "hangzhou",
      "hangzhou",
      "k8s",
      "CMCC",
      "kubernetes"
    ]
  },
  "serial_number": "710560358356596706147767323881866756079417115338",
  "sans": [
    "127.0.0.1",
    "192.168.159.156"
  ],
  "not_before": "2022-08-03T07:25:00Z",
  "not_after": "2032-07-31T07:25:00Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "41:33:28:8C:FC:B9:AC:DF:BF:89:B:25:CF:C7:8C:19:13:B4:BC:18",
  "subject_key_id": "84:E7:1D:76:55:B2:CE:78:A1:DF:74:A7:9F:E8:17:17:74:B:8A:79",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIEIDCCAwigAwIBAgIUfHag4eqFd5/HPfLpPU6ydeaVQsowDQYJKoZIhvcNAQEL\nBQAwZTELMAkGA1UEBhMCQ04xETAPBgNVBAgTCGhhbmd6aG91MREwDwYDVQQHEwho\nYW5nemhvdTEMMAoGA1UEChMDazhzMQ0wCwYDVQQLEwRDTUNDMRMwEQYDVQQDEwpr\ndWJlcm5ldGVzMB4XDTIyMDgwMzA3MjUwMFoXDTMyMDczMTA3MjUwMFowgZQxCzAJ\nBgNVBAYTAkNOMREwDwYDVQQIEwh6aGVqaWFuZzERMA8GA1UEBxMIaGFuZ3pob3Ux\nJzAlBgNVBAoTHnN5c3RlbTprdWJlLWNvbnRyb2xsZXItbWFuYWdlcjENMAsGA1UE\nCxMEQ01DQzEnMCUGA1UEAxMec3lzdGVtOmt1YmUtY29udHJvbGxlci1tYW5hZ2Vy\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAz+NfxK6XegsbIk5wHZyu\npijrK3Q1erx03ioL5T5PNeLsPMf89o2+XdP//IqmTP2Ys1bQD5U+Xwpiw0AeHYc2\nrItIVj3ARZBZHyW8CSw/7wAm2tEeadwQCvg1iSRRYu5hKCwpxqJG63+VT1n6uOds\no2BjxonnSEfpn957a1riBN44bYVcBIO6fefFIdMrRzfrJT+4dTO198tmAHRJN30T\nf4CAnLNtwW8KpafKzDgM0SNRk2CZx/xhdlzq10p1Ef404dBvWmsjKyqPPA1XiJdO\nzXhnuEez5CwXw+P+3GkFbPB6yYUvvK/KBa9U6ZyoBA60+jHMv3izgUKQ4UVzDChT\nNQIDAQABo4GXMIGUMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcD\nAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUhOcddlWyznih33Sn\nn+gXF3QLinkwHwYDVR0jBBgwFoAUQTMojPy5rN+/iQslz8eMGRO0vBgwFQYDVR0R\nBA4wDIcEfwAAAYcEwKifnDANBgkqhkiG9w0BAQsFAAOCAQEA4qdnV2AvQKVswRU0\nVp8HniojGaTNgzuvZCaiKIHMntJ912JwiRtIeCPyaEu0RYgUo/0YtaweRGiiSWv/\nbqaHM+KJcoeZrIpFzLdrP730HsZUM35Tm5p/fdzuFsQEqrAk6c0x5Z+rThkmmIAf\nq8Gck2huBl4a65jEksxW1zXetM5dFc7fSIuto/wPE5/3iJnrE1MfCiOtwOoprYM7\nQfbEo5hHGZ52pk0mvXwakgfFpANoAdsN2FVNVxScjiqcGJnOreHP6LEv6095Bi9F\nq5Ac5N/+05PwwjiYKwpozgDHGMZipE4rvnTH9iCEfO6lxasT9bqWhf5953SKqkAn\nKZSQjw==\n-----END CERTIFICATE-----\n"
}

Comparing the certificates shows that the issuer of the service certificates does not match the CA in use: their authority_key_id (41:33:28:8C:...) differs from the CA's subject_key_id (84:B0:3E:D3:...), and the issuer's province reads hangzhou while the current CA's subject says zhejiang. The service certificates were signed by an older CA and must be re-issued.
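
A minimal re-issue sketch with cfssl. The CSR and CA-config filenames below are assumptions (they are not shown in this post); the CA paths match the flags above:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=ca-config.json -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
cp kube-controller-manager*.pem /etc/kubernetes/cert/
systemctl restart kube-controller-manager.service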

[root@k8master-1 work]# cfssl certinfo -cert kube-controller-manager.pem
{
  "subject": {
    "common_name": "system:kube-controller-manager",
    "country": "CN",
    "organization": "system:kube-controller-manager",
    "organizational_unit": "CMCC",
    "locality": "hangzhou",
    "province": "zhejiang",
    "names": [
      "CN",
      "zhejiang",
      "hangzhou",
      "system:kube-controller-manager",
      "CMCC",
      "system:kube-controller-manager"
    ]
  },
  "issuer": {
    "common_name": "kubernetes",
    "country": "CN",
    "organization": "k8s",
    "organizational_unit": "CMCC",
    "locality": "hangzhou",
    "province": "zhejiang",
    "names": [
      "CN",
      "zhejiang",
      "hangzhou",
      "k8s",
      "CMCC",
      "kubernetes"
    ]
  },
  "serial_number": "599221113647138869284424847635099235022063063206",
  "sans": [
    "127.0.0.1",
    "192.168.159.156"
  ],
  "not_before": "2022-08-10T07:32:00Z",
  "not_after": "2032-08-07T07:32:00Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "84:B0:3E:D3:AF:DD:C3:EE:35:34:C0:A9:6D:61:3B:85:3:DA:D7:B5",
  "subject_key_id": "12:D0:B0:34:CA:A5:9:61:C8:76:A0:D1:4A:A1:AD:3D:32:A8:15:A7",
  "pem": "-----BEGIN CERTIFICATE-----\nMIIEIDCCAwigAwIBAgIUaPYBCPbRK3QJs+ZrQfxvqR/E0KYwDQYJKoZIhvcNAQEL\nBQAwZTELMAkGA1UEBhMCQ04xETAPBgNVBAgTCHpoZWppYW5nMREwDwYDVQQHEwho\nYW5nemhvdTEMMAoGA1UEChMDazhzMQ0wCwYDVQQLEwRDTUNDMRMwEQYDVQQDEwpr\ndWJlcm5ldGVzMB4XDTIyMDgxMDA3MzIwMFoXDTMyMDgwNzA3MzIwMFowgZQxCzAJ\nBgNVBAYTAkNOMREwDwYDVQQIEwh6aGVqaWFuZzERMA8GA1UEBxMIaGFuZ3pob3Ux\nJzAlBgNVBAoTHnN5c3RlbTprdWJlLWNvbnRyb2xsZXItbWFuYWdlcjENMAsGA1UE\nCxMEQ01DQzEnMCUGA1UEAxMec3lzdGVtOmt1YmUtY29udHJvbGxlci1tYW5hZ2Vy\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvOcvWcbcIJiQDR00vF7z\nbiGaVsIZQO/O4xt/I28wSE/FoYwTVzWR7CrX40sJnQKLOzKv35CMxfC3ISa21W0d\nazzbGeI2wu/ePn7oCohGeoaz0xyKrbv1/JeNL7b9OOBm+aeferoTg48xHXwNBNK0\nYcmckZUk93eH1pKzuctkDMnI4UPZ18L5NZawALOpLbjRVYIcwiEXXeA3hCrV8TEL\nA8LNnwEpDt/CThM8cBfCXeTTqyCMgY3tYTG14Xyi79D+C/z+YXwRtu8Xxhy+yAAM\ncahCjKUswfOu2nV+ctXAQsLT3Tq4NAN1/YQNoIct7EzEragTNs1XCmPsn1bvNDbV\nfQIDAQABo4GXMIGUMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcD\nAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUEtCwNMqlCWHIdqDR\nSqGtPTKoFacwHwYDVR0jBBgwFoAUhLA+06/dw+41NMCpbWE7hQPa17UwFQYDVR0R\nBA4wDIcEfwAAAYcEwKifnDANBgkqhkiG9w0BAQsFAAOCAQEAA7kmV4G9VjumH7Ug\nNhB+SkIZ2wVzX1iIaFf9yQ7HGaxHKuInB72CgLBjCoa7nim3g3s5RmtF3kr/paO8\ntdhP5qPCVzvNnvKK/CktuMSI+iWiZaHg2XAv3HYGO+kxfX7L5OSRRhXhpCD1Yg1/\nx7qF71nBtGzCJuZ1iQlIDC2WfDmQvpoyFjxd3Grt6m5OacyAdQG2m7OwAj/4rrkC\nVfkMXESi0dmUPCPuXvG0UCWv9xU23qMlu/QXmD+FdXh+BxJdkDSI6dNsQowgmhhQ\n1u+H4paigmlFxB9cqYNJrGVmarEhrRQUwh6mJ/xvia1fLF2vmWhl9wrOW6e83U7z\n59aZSQ==\n-----END CERTIFICATE-----\n"
}
# the certificate embedded in /etc/kubernetes/kube-controller-manager.kubeconfig also needs checking
cat /etc/kubernetes/kube-controller-manager.kubeconfig
echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVJRENDQXdpZ0F3SUJBZ0lVZkhhZzRlcUZkNS9IUGZMcFBVNnlkZWFWUXNvd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0doaGJtZDZhRzkxTVJFd0R3WURWUVFIRXdobwpZVzVuZW1odmRURU1NQW9HQTFVRUNoTURhemh6TVEwd0N3WURWUVFMRXdSRFRVTkRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl5TURnd016QTNNalV3TUZvWERUTXlNRGN6TVRBM01qVXdNRm93Z1pReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoNmFHVnFhV0Z1WnpFUk1BOEdBMVVFQnhNSWFHRnVaM3BvYjNVeApKekFsQmdOVkJBb1RIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pFTk1Bc0dBMVVFCkN4TUVRMDFEUXpFbk1DVUdBMVVFQXhNZWMzbHpkR1Z0T210MVltVXRZMjl1ZEhKdmJHeGxjaTF0WVc1aFoyVnkKTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF6K05meEs2WGVnc2JJazV3SFp5dQpwaWpySzNRMWVyeDAzaW9MNVQ1UE5lTHNQTWY4OW8yK1hkUC8vSXFtVFAyWXMxYlFENVUrWHdwaXcwQWVIWWMyCnJJdElWajNBUlpCWkh5VzhDU3cvN3dBbTJ0RWVhZHdRQ3ZnMWlTUlJZdTVoS0N3cHhxSkc2MytWVDFuNnVPZHMKbzJCanhvbm5TRWZwbjk1N2ExcmlCTjQ0YllWY0JJTzZmZWZGSWRNclJ6ZnJKVCs0ZFRPMTk4dG1BSFJKTjMwVApmNENBbkxOdHdXOEtwYWZLekRnTTBTTlJrMkNaeC94aGRsenExMHAxRWY0MDRkQnZXbXNqS3lxUFBBMVhpSmRPCnpYaG51RWV6NUN3WHcrUCszR2tGYlBCNnlZVXZ2Sy9LQmE5VTZaeW9CQTYwK2pITXYzaXpnVUtRNFVWekRDaFQKTlFJREFRQUJvNEdYTUlHVU1BNEdBMVVkRHdFQi93UUVBd0lGb0RBZEJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjRApBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVVoT2NkZGxXeXpuaWgzM1NuCm4rZ1hGM1FMaW5rd0h3WURWUjBqQkJnd0ZvQVVRVE1valB5NXJOKy9pUXNsejhlTUdSTzB2Qmd3RlFZRFZSMFIKQkE0d0RJY0Vmd0FBQVljRXdLaWZuREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBNHFkblYyQXZRS1Zzd1JVMApWcDhIbmlvakdhVE5nenV2WkNhaUtJSE1udEo5MTJKd2lSdEllQ1B5YUV1MFJZZ1VvLzBZdGF3ZVJHaWlTV3YvCmJxYUhNK0tKY29lWnJJcEZ6TGRyUDczMEhzWlVNMzVUbTVwL2ZkenVGc1FFcXJBazZjMHg1WityVGhrbW1JQWYKcThHY2syaHVCbDRhNjVqRWtzeFcxelhldE01ZEZjN2ZTSXV0by93UEU1LzNpSm5yRTFNZkNpT3R3T29wcllNNwpRZmJFbzVoSEdaNTJwazBtdlh3YWtnZkZwQU5vQWRzTjJGVk5WeFNjamlxY0dKbk9yZUhQNkxFdjYwOTVCaTlGCnE1QWM1Ti8rMDVQd3dqaVlLd3BvemdESEdNWmlwRTRydm5USDlpQ0VmTzZseGFzVDlicVdoZjU5NTNTS3FrQW4KS1pTUWp3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=" |base64 -d >/tmp/1.pem

# the embedded certificate is also mismatched (still signed by the old CA)
[root@k8master-1 work]# openssl x509 -in /tmp/1.pem  -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            7c:76:a0:e1:ea:85:77:9f:c7:3d:f2:e9:3d:4e:b2:75:e6:95:42:ca
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=hangzhou, L=hangzhou, O=k8s, OU=CMCC, CN=kubernetes
        Validity
            Not Before: Aug  3 07:25:00 2022 GMT
            Not After : Jul 31 07:25:00 2032 GMT
        Subject: C=CN, ST=zhejiang, L=hangzhou, O=system:kube-controller-manager, OU=CMCC, CN=system:kube-controller-manager
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:cf:e3:5f:c4:ae:97:7a:0b:1b:22:4e:70:1d:9c:
                    ae:a6:28:eb:2b:74:35:7a:bc:74:de:2a:0b:e5:3e:
                    4f:35:e2:ec:3c:c7:fc:f6:8d:be:5d:d3:ff:fc:8a:
                    a6:4c:fd:98:b3:56:d0:0f:95:3e:5f:0a:62:c3:40:
                    1e:1d:87:36:ac:8b:48:56:3d:c0:45:90:59:1f:25:
                    bc:09:2c:3f:ef:00:26:da:d1:1e:69:dc:10:0a:f8:
                    35:89:24:51:62:ee:61:28:2c:29:c6:a2:46:eb:7f:
                    95:4f:59:fa:b8:e7:6c:a3:60:63:c6:89:e7:48:47:
                    e9:9f:de:7b:6b:5a:e2:04:de:38:6d:85:5c:04:83:
                    ba:7d:e7:c5:21:d3:2b:47:37:eb:25:3f:b8:75:33:
                    b5:f7:cb:66:00:74:49:37:7d:13:7f:80:80:9c:b3:
                    6d:c1:6f:0a:a5:a7:ca:cc:38:0c:d1:23:51:93:60:
                    99:c7:fc:61:76:5c:ea:d7:4a:75:11:fe:34:e1:d0:
                    6f:5a:6b:23:2b:2a:8f:3c:0d:57:88:97:4e:cd:78:
                    67:b8:47:b3:e4:2c:17:c3:e3:fe:dc:69:05:6c:f0:
                    7a:c9:85:2f:bc:af:ca:05:af:54:e9:9c:a8:04:0e:
                    b4:fa:31:cc:bf:78:b3:81:42:90:e1:45:73:0c:28:
                    53:35
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier: 
                84:E7:1D:76:55:B2:CE:78:A1:DF:74:A7:9F:E8:17:17:74:0B:8A:79
            X509v3 Authority Key Identifier: 
                keyid:41:33:28:8C:FC:B9:AC:DF:BF:89:0B:25:CF:C7:8C:19:13:B4:BC:18

            X509v3 Subject Alternative Name: 
                IP Address:127.0.0.1, IP Address:192.168.159.156
    Signature Algorithm: sha256WithRSAEncryption
         e2:a7:67:57:60:2f:40:a5:6c:c1:15:34:56:9f:07:9e:2a:23:
         19:a4:cd:83:3b:af:64:26:a2:28:81:cc:9e:d2:7d:d7:62:70:
         89:1b:48:78:23:f2:68:4b:b4:45:88:14:a3:fd:18:b5:ac:1e:
         44:68:a2:49:6b:ff:6e:a6:87:33:e2:89:72:87:99:ac:8a:45:
         cc:b7:6b:3f:bd:f4:1e:c6:54:33:7e:53:9b:9a:7f:7d:dc:ee:
         16:c4:04:aa:b0:24:e9:cd:31:e5:9f:ab:4e:19:26:98:80:1f:
         ab:c1:9c:93:68:6e:06:5e:1a:eb:98:c4:92:cc:56:d7:35:de:
         b4:ce:5d:15:ce:df:48:8b:ad:a3:fc:0f:13:9f:f7:88:99:eb:
         13:53:1f:0a:23:ad:c0:ea:29:ad:83:3b:41:f6:c4:a3:98:47:
         19:9e:76:a6:4d:26:bd:7c:1a:92:07:c5:a4:03:68:01:db:0d:
         d8:55:4d:57:14:9c:8e:2a:9c:18:99:ce:ad:e1:cf:e8:b1:2f:
         eb:4f:79:06:2f:45:ab:90:1c:e4:df:fe:d3:93:f0:c2:38:98:
         2b:0a:68:ce:00:c7:18:c6:62:a4:4e:2b:be:74:c7:f6:20:84:
         7c:ee:a5:c5:ab:13:f5:ba:96:85:fe:7d:e7:74:8a:aa:40:27:
         29:94:90:8f


[root@k8master-1 work]# openssl x509 -in ./2.pem  -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            68:f6:01:08:f6:d1:2b:74:09:b3:e6:6b:41:fc:6f:a9:1f:c4:d0:a6
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=zhejiang, L=hangzhou, O=k8s, OU=CMCC, CN=kubernetes
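
Rather than eyeballing issuer fields, openssl can verify directly whether a certificate chains to the deployed CA; a minimal check (here against the re-issued 2.pem):

openssl verify -CAfile /etc/kubernetes/cert/ca.pem ./2.pem
# prints "./2.pem: OK" on success; a certificate signed by a different CA
# fails with an "unable to get local issuer certificate" style error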

Similar errors from kube-scheduler can be traced the same way:

Aug 10 16:55:33 k8master-1 kube-scheduler[20154]: E0810 16:55:33.369411   20154 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.159.156:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": x509: certificate signed by unknown authority

4. flanneld

Aug 10 17:16:52 k8worker-1 systemd[1]: Starting Flanneld...
Aug 10 17:16:52 k8worker-1 flanneld[73178]: I0810 17:16:52.593301   73178 main.go:533] Using interface with name ens160 and address 192.168.159.158
Aug 10 17:16:52 k8worker-1 flanneld[73178]: I0810 17:16:52.593355   73178 main.go:550] Defaulting external address to interface address (192.168.159.158)
Aug 10 17:16:52 k8worker-1 flanneld[73178]: E0810 17:16:52.594511   73178 main.go:251] Failed to create SubnetManager: env variables POD_NAME and POD_NAMESPACE must be set
Aug 10 17:16:52 k8worker-1 systemd[1]: flanneld.service: main process exited, code=exited, status=1/FAILURE
Aug 10 17:16:52 k8worker-1 systemd[1]: Failed to start Flanneld.
Aug 10 17:16:52 k8worker-1 systemd[1]: Unit flanneld.service entered failed state.
Aug 10 17:16:52 k8worker-1 systemd[1]: flanneld.service failed.

Investigation showed the configuration file (below) was wrong: the server address, 192.168.3.140:8443, does not match this cluster's apiserver at 192.168.159.156:6443, and decoding the embedded certificate-authority-data reveals a stale CA certificate.

[root@k8worker-1 work]# cat /etc/kubernetes/flannel.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3RENDQXFpZ0F3SUJBZ0lVSzRza2pwZTQ5dnFKWllDMUpKeGJUakFBc3I0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0doaGJtZDZhRzkxTVJFd0R3WURWUVFIRXdobwpZVzVuZW1odmRURU1NQW9HQTFVRUNoTURhemh6TVEwd0N3WURWUVFMRXdSRFRVTkRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1DQVhEVEl5TURnd016QTNNRFV3TUZvWUR6SXhNakl3TnpFd01EY3dOVEF3V2pCbE1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSWFHRnVaM3BvYjNVeEVUQVBCZ05WQkFjVENHaGhibWQ2YUc5MQpNUXd3Q2dZRFZRUUtFd05yT0hNeERUQUxCZ05WQkFzVEJFTk5RME14RXpBUkJnTlZCQU1UQ210MVltVnlibVYwClpYTXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEbXlua09rVys4QVBXWVdJMEUKSXZVcnJONjNJb1ArZnl6NFc3aE40ODM5eTdRU3FvUkJBamlVcEZ1WFlYRERIaWR5Qk14MldoS1hHb3A0NnBQdgpESDZtYjZyZ2dCTFIzRDR5NGhqU2hBRU1kbVZKeVBoQ0tyWkRIRmlsMVlxdVQyR25pMVNFVjVEWkFtb3YvUm81ClZUVnpoeXB4dC9IM2EwUFd6a3pNVmdwODhjVnh2eFBsaU1SVkZFRlIySGxDOXQ2R3JTbmltWW5wK2dDVEI2UFcKRXFUMTVYLzBHT3o0OEhRQ2YrNUFHazRXZEFNVmVVQkVPdk1naEpneHNPdmN3ZkJSMlE4NHpIMHFjRGZJTk5nMQpqTytNQk5KN2JLY081U21XbmI4L0F5MmU4em9RcXFYKzVBcUtuanVxWkdPUFhvdFBta3pZWlZRVHZSUTM3OWdGClBkSGJBZ01CQUFHalpqQmtNQTRHQTFVZER3RUIvd1FFQXdJQkJqQVNCZ05WSFJNQkFmOEVDREFHQVFIL0FnRUMKTUIwR0ExVWREZ1FXQkJSQk15aU0vTG1zMzcrSkN5WFB4NHdaRTdTOEdEQWZCZ05WSFNNRUdEQVdnQlJCTXlpTQovTG1zMzcrSkN5WFB4NHdaRTdTOEdEQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFON0puZzF4RkNkVzFtZ2NCCmNBUXVLRnFPNlNaZXlHUmdhbFlLY3g1RkZDWmtYV21OSU1LRGZDamhuN3dWWVFuRWZDRHRLZHRTNTBSUGt0aXYKRERPeEhrekNlL2lEN1BNb2FHYUt6USt1cTRaMXE5UStPVkxwZ0E1WWQ3UDFGZ3Y3N2ZSN2ZjT2VuU29LTnJ2NAp0TkhDdGJvZmhyeEdJaTF0VGNQazFFcDhYcHdKRmd4bWxkWEx3VHBIeWlENWoxcjg1L3hmTVFyNnRkOWdtWjEyCjBEbEgvNnlteEk2cmhqLzJFVy9VUURza1NsVVI0VjBlUnpBcWN4OE83ajFZQjJKYVRIUmZLMW9OdmhEYWdZTkQKS2IzeVYxTEg3MzQzS3ZKREZJNGFMT1RFWWg1Qks3ckZEdHFvR3FuaGUvVDhZakZPM3JRZVFzYjFYVDlhMEw5VwplMlRTYUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.3.140:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: flannel
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: flannel
  user: {}

[root@k8worker-1 ~]# echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3RENDQXFpZ0F3SUJBZ0lVSzRza2pwZTQ5dnFKWllDMUpKeGJUakFBc3I0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0doaGJtZDZhRzkxTVJFd0R3WURWUVFIRXdobwpZVzVuZW1odmRURU1NQW9HQTFVRUNoTURhemh6TVEwd0N3WURWUVFMRXdSRFRVTkRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1DQVhEVEl5TURnd016QTNNRFV3TUZvWUR6SXhNakl3TnpFd01EY3dOVEF3V2pCbE1Rc3cKQ1FZRFZRUUdFd0pEVGpFUk1BOEdBMVVFQ0JNSWFHRnVaM3BvYjNVeEVUQVBCZ05WQkFjVENHaGhibWQ2YUc5MQpNUXd3Q2dZRFZRUUtFd05yT0hNeERUQUxCZ05WQkFzVEJFTk5RME14RXpBUkJnTlZCQU1UQ210MVltVnlibVYwClpYTXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEbXlua09rVys4QVBXWVdJMEUKSXZVcnJONjNJb1ArZnl6NFc3aE40ODM5eTdRU3FvUkJBamlVcEZ1WFlYRERIaWR5Qk14MldoS1hHb3A0NnBQdgpESDZtYjZyZ2dCTFIzRDR5NGhqU2hBRU1kbVZKeVBoQ0tyWkRIRmlsMVlxdVQyR25pMVNFVjVEWkFtb3YvUm81ClZUVnpoeXB4dC9IM2EwUFd6a3pNVmdwODhjVnh2eFBsaU1SVkZFRlIySGxDOXQ2R3JTbmltWW5wK2dDVEI2UFcKRXFUMTVYLzBHT3o0OEhRQ2YrNUFHazRXZEFNVmVVQkVPdk1naEpneHNPdmN3ZkJSMlE4NHpIMHFjRGZJTk5nMQpqTytNQk5KN2JLY081U21XbmI4L0F5MmU4em9RcXFYKzVBcUtuanVxWkdPUFhvdFBta3pZWlZRVHZSUTM3OWdGClBkSGJBZ01CQUFHalpqQmtNQTRHQTFVZER3RUIvd1FFQXdJQkJqQVNCZ05WSFJNQkFmOEVDREFHQVFIL0FnRUMKTUIwR0ExVWREZ1FXQkJSQk15aU0vTG1zMzcrSkN5WFB4NHdaRTdTOEdEQWZCZ05WSFNNRUdEQVdnQlJCTXlpTQovTG1zMzcrSkN5WFB4NHdaRTdTOEdEQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFON0puZzF4RkNkVzFtZ2NCCmNBUXVLRnFPNlNaZXlHUmdhbFlLY3g1RkZDWmtYV21OSU1LRGZDamhuN3dWWVFuRWZDRHRLZHRTNTBSUGt0aXYKRERPeEhrekNlL2lEN1BNb2FHYUt6USt1cTRaMXE5UStPVkxwZ0E1WWQ3UDFGZ3Y3N2ZSN2ZjT2VuU29LTnJ2NAp0TkhDdGJvZmhyeEdJaTF0VGNQazFFcDhYcHdKRmd4bWxkWEx3VHBIeWlENWoxcjg1L3hmTVFyNnRkOWdtWjEyCjBEbEgvNnlteEk2cmhqLzJFVy9VUURza1NsVVI0VjBlUnpBcWN4OE83ajFZQjJKYVRIUmZLMW9OdmhEYWdZTkQKS2IzeVYxTEg3MzQzS3ZKREZJNGFMT1RFWWg1Qks3ckZEdHFvR3FuaGUvVDhZakZPM3JRZVFzYjFYVDlhMEw5VwplMlRTYUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" |base64 -d >/tmp/5.pem
[root@k8worker-1 ~]# openssl x509 -in /tmp/5.pem  -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            2b:8b:24:8e:97:b8:f6:fa:89:65:80:b5:24:9c:5b:4e:30:00:b2:be
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=hangzhou, L=hangzhou, O=k8s, OU=CMCC, CN=kubernetes
        Validity
            Not Before: Aug  3 07:05:00 2022 GMT
            Not After : Jul 10 07:05:00 2122 GMT
        Subject: C=CN, ST=hangzhou, L=hangzhou, O=k8s, OU=CMCC, CN=kubernetes

# the embedded CA certificate was issued incorrectly (subject ST=hangzhou rather than zhejiang, Not After in 2122); re-issue it and regenerate the kubeconfig
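
A minimal sketch for regenerating flannel.conf against the correct CA and apiserver (paths, address, and the flannel user name are taken from the files shown here; the rest is standard kubectl config usage). Credentials for the flannel user still need a token, which section 5 below turns out to address:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true \
  --server=https://192.168.159.156:6443 \
  --kubeconfig=/etc/kubernetes/flannel.conf
kubectl config set-context kubernetes --cluster=kubernetes --user=flannel \
  --kubeconfig=/etc/kubernetes/flannel.conf
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/flannel.conf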

5. flanneld

Aug 11 08:54:50 k8worker-1 flanneld[80055]: I0811 08:54:50.606001   80055 kube.go:299] Starting kube subnet manager
Aug 11 08:54:50 k8worker-1 flanneld[80055]: E0811 08:54:50.634406   80055 reflector.go:127] github.com/flannel-io/flannel/subnet/kube/kube.go:300: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 11 08:54:51 k8worker-1 flanneld[80055]: E0811 08:54:51.924514   80055 reflector.go:127] github.com/flannel-io/flannel/subnet/kube/kube.go:300: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 11 08:54:55 k8worker-1 flanneld[80055]: E0811 08:54:55.034559   80055 reflector.go:127] github.com/flannel-io/flannel/subnet/kube/kube.go:300: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 11 08:55:00 k8worker-1 flanneld[80055]: E0811 08:55:00.367154   80055 reflector.go:127] github.com/flannel-io/flannel/subnet/kube/kube.go:300: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 11 08:55:09 k8worker-1 flanneld[80055]: E0811 08:55:09.575133   80055 reflector.go:127] github.com/flannel-io/flannel/subnet/kube/kube.go:300: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 11 08:55:20 k8worker-1 etcd[56499]: {"level":"warn","ts":"2022-08-11T08:55:20.810+0800","caller":"rafthttp/probing_status.go:82","msg":"prober found high clock drift","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7d173c333430d55","clock-drift":"1.390835196s","rtt":"1.165592ms"}
Aug 11 08:55:27 k8worker-1 flanneld[80055]: E0811 08:55:27.816591   80055 reflector.go:127] github.com/flannel-io/flannel/subnet/kube/kube.go:300: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope

Troubleshooting:

[root@k8worker-1 ~]# cat /etc/kubernetes/flannel.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVUE9wN3Z1ZUVhNHdYWW9TT21OY1Evc1ozeVpFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0hwb1pXcHBZVzVuTVJFd0R3WURWUVFIRXdobwpZVzVuZW1odmRURU1NQW9HQTFVRUNoTURhemh6TVEwd0N3WURWUVFMRXdSRFRVTkRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl5TURneE1EQXlNVGd3TUZvWERUTXlNRGd3TnpBeU1UZ3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0hwb1pXcHBZVzVuTVJFd0R3WURWUVFIRXdob1lXNW5lbWh2ZFRFTQpNQW9HQTFVRUNoTURhemh6TVEwd0N3WURWUVFMRXdSRFRVTkRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdmtrVGdXdFg3M2NWazdZUWp4VXMKeHYrSmRZblJseUw0WHJXYXFQSU1UY1BIb3NKem8vYm5uMU5lZy8yczZUaFduZHlKRlc2YlM3NkZQTmkvdG5zRgppOERKUGtaa2wzUVZPSE9zdGY3eDNOV0VtcG8rWmhOTG8wNnpkczh3Qmlla1NnVGRCV3RpU3JyckhGSURWdGdhCjBuakUycW9RVWd1QjhuUlhzVGUwTS9uayt6eEJIRUFJaG9GViswVklTcEJLbHlzaGRxeEtyUjJDMWo0YWQyMkUKaDNnK3MvTkpUNGpLWTlhZXcxZmlkNDdPNlZhZVNMa3I0SlhvdGEveDY0L2crMVpYcU9yU3BncmpQeC9SR3ZuSQpCS0EzQkxOR2o0d2dPd3o5Rk16ZGU1RDJXWGFxblNzcmlWT1ZIL2FVWXdNM0liVVRkMlh6eDZpMzdGMmkyNXJrCkZRSURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVaExBKzA2L2R3KzQxTk1DcGJXRTdoUVBhMTdVd0h3WURWUjBqQkJnd0ZvQVVoTEErMDYvZAp3KzQxTk1DcGJXRTdoUVBhMTdVd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFFMW4xTElURG1qYmVPM3o0SjNKCmcrM3RKWFFpWTJNQ1B5OTNJZUdVS1lZT1pkK0ZoYWFIUXo4WW02WjVuTGR1K2RST0Z5MFByOUlROGxwWjdOLy8KY09aTzBKMVZUUUpGTk9rUTdMQ2dMUmwyVzVGWVQwTldpWXdqMEdtNjBESDVUZE9xelNBeEp5cVh5L1NvSzlUUQpyaUZjMTRTcnRIZHR4bW5MeGNUeW9GdEV1TEJ1c2FCYnhxTUZ2TEhJc3FDMitsYjFZbkMwZnVpS1R0TVZXNCtiCjJpcjdHek83bDYwcTF3eHppTHVvQnhyT0NuRk04NmkzZWYrTE9ySXA0QU1IVkx0SXY0bEd0cGN1N0N5eU5PamoKdXNxMlp4OWpHZDZNWnptZDRnVVppeVpldTkzLzMxRWRaYWtkK1M2UWR5bE1TQ3g2bUtwRk80eUZPS1NpZmczSQpwcVU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.159.156:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: flannel
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: flannel
  user: {}


No user credentials are present: the flannel user entry is empty (user: {}), so the apiserver treats flanneld's requests as system:anonymous, which is exactly what the forbidden errors above show.

[root@k8master-1 work]# kubectl get sa -A
NAMESPACE         NAME                                 SECRETS   AGE
default           default                              3         18h
kube-node-lease   default                              1         18h
kube-public       default                              2         18h
kube-system       attachdetach-controller              1         22h
kube-system       bootstrap-signer                     1         22h
kube-system       certificate-controller               1         23h
kube-system       clusterrole-aggregation-controller   1         22h
kube-system       cronjob-controller                   1         22h
kube-system       daemon-set-controller                1         22h
kube-system       default                              1         18h
kube-system       deployment-controller                1         23h
kube-system       disruption-controller                2         22h
kube-system       endpoint-controller                  1         22h
kube-system       endpointslice-controller             1         22h
kube-system       endpointslicemirroring-controller    1         22h
kube-system       ephemeral-volume-controller          1         22h
kube-system       expand-controller                    1         22h
kube-system       flannel                              0         13m

The flannel ServiceAccount has zero bound SECRETS; consistent with that, no flannel token secret appears in the kube-system secret listing below, so a token must be created and wired into flannel.conf (see the sketch after the listing).

[root@k8master-1 work]# kubectl describe serviceaccounts flannel  -n kube-system 
Name:                flannel
Namespace:           kube-system
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

[root@k8master-1 work]# kubectl get secrets   -n kube-system 
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-x5htx              kubernetes.io/service-account-token   3      23h
bootstrap-signer-token-lb466                     kubernetes.io/service-account-token   3      23h
certificate-controller-token-6ln5x               kubernetes.io/service-account-token   3      24h
clusterrole-aggregation-controller-token-q5hl4   kubernetes.io/service-account-token   3      23h
cronjob-controller-token-kvwqx                   kubernetes.io/service-account-token   3      23h
daemon-set-controller-token-ljcbh                kubernetes.io/service-account-token   3      23h
default-token-4gmqk                              kubernetes.io/service-account-token   3      7h59m
deployment-controller-token-t7jlg                kubernetes.io/service-account-token   3      24h
disruption-controller-token-pxmc4                kubernetes.io/service-account-token   3      23h
endpoint-controller-token-wldr9                  kubernetes.io/service-account-token   3      23h
endpointslice-controller-token-cs6km             kubernetes.io/service-account-token   3      23h
endpointslicemirroring-controller-token-7v6wp    kubernetes.io/service-account-token   3      23h
ephemeral-volume-controller-token-s9rsb          kubernetes.io/service-account-token   3      23h
expand-controller-token-88v4q                    kubernetes.io/service-account-token   3      23h
generic-garbage-collector-token-vnqk9            kubernetes.io/service-account-token   3      23h
horizontal-pod-autoscaler-token-4cjjx            kubernetes.io/service-account-token   3      23h
job-controller-token-784rk                       kubernetes.io/service-account-token   3      23h
namespace-controller-token-r5xt8                 kubernetes.io/service-account-token   3      23h
node-controller-token-kscs6                      kubernetes.io/service-account-token   3      24h
persistent-volume-binder-token-6q4q8             kubernetes.io/service-account-token   3      22h
pod-garbage-collector-token-qlmbv                kubernetes.io/service-account-token   3      23h
pv-protection-controller-token-9wzrz             kubernetes.io/service-account-token   3      23h
pvc-protection-controller-token-rshqf            kubernetes.io/service-account-token   3      23h
replicaset-controller-token-99r45                kubernetes.io/service-account-token   3      24h
replication-controller-token-p5sjt               kubernetes.io/service-account-token   3      23h
resourcequota-controller-token-9bcr4             kubernetes.io/service-account-token   3      23h
root-ca-cert-publisher-token-ccfqs               kubernetes.io/service-account-token   3      23h
service-account-controller-token-h69fk           kubernetes.io/service-account-token   3      23h
service-controller-token-rhd9x                   kubernetes.io/service-account-token   3      23h
statefulset-controller-token-lz5b8               kubernetes.io/service-account-token   3      23h
token-cleaner-token-r5gdz                        kubernetes.io/service-account-token   3      23h
ttl-after-finished-controller-token-6jkw2        kubernetes.io/service-account-token   3      23h
ttl-controller-token-8l2xk                       kubernetes.io/service-account-token   3      24h
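
A minimal sketch for creating the missing token and embedding it in flannel.conf. The secret name flannel-token is an assumption; an annotated Secret of type kubernetes.io/service-account-token is the standard way to have the token controller mint a token for a ServiceAccount:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: flannel-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: flannel
type: kubernetes.io/service-account-token
EOF
TOKEN=$(kubectl -n kube-system get secret flannel-token -o jsonpath='{.data.token}' | base64 -d)
kubectl config set-credentials flannel --token=$TOKEN --kubeconfig=/etc/kubernetes/flannel.conf
# restart flanneld afterwards so it picks up the credentials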


6. kube-scheduler

Aug 11 09:26:08 k8master-1 kube-scheduler[42847]: E0811 09:26:08.787316   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:26:12 k8master-1 kube-scheduler[42847]: E0811 09:26:12.800858   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:26:22 k8master-1 kube-scheduler[42847]: E0811 09:26:22.852936   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:26:24 k8master-1 kube-scheduler[42847]: E0811 09:26:24.852133   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:26:30 k8master-1 kube-scheduler[42847]: E0811 09:26:30.870866   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:26:42 k8master-1 kube-scheduler[42847]: E0811 09:26:42.921730   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:26:48 k8master-1 kube-scheduler[42847]: E0811 09:26:48.940275   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:26:50 k8master-1 kube-scheduler[42847]: E0811 09:26:50.941090   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:27:00 k8master-1 kube-scheduler[42847]: E0811 09:27:00.981691   42847 leaderelection.go:367] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
Aug 11 09:27:01 k8master-1 etcd[34352]: {"level":"warn","ts":"2022-08-11T09:27:01.096+0800","caller":"etcdserver/util.go:123","msg":"failed to apply request","took":"5.795µs","request":"header:<ID:960817609993167261 username:\"etcd\" auth_revision:1 > compaction:<revision:9102 > ","response":"","error":"mvcc: required revision is a future revision"}
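
The etcd clock-drift warning in section 5 (about 1.39s between peers) and the repeated lease-update conflicts here both suggest checking time synchronization first, since lease-based leader election assumes roughly synchronized clocks. A minimal check on every node, assuming chrony is the time service:

timedatectl status | grep -i synchronized   # expect "System clock synchronized: yes"
chronyc tracking                            # "System time" offset should be within milliseconds
chronyc sources -v                          # confirm reachable NTP sources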

kube-apiserver

Aug 11 10:47:30 k8master-1 kube-apiserver[22888]: E0811 10:47:30.312728   22888 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, square/go-jose: error in cryptographic primitive]"
Aug 11 10:47:35 k8master-1 kube-apiserver[22888]: E0811 10:47:35.641102   22888 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, square/go-jose: error in cryptographic primitive]"
Aug 11 10:47:42 k8master-1 kube-apiserver[22888]: E0811 10:47:42.783086   22888 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, square/go-jose: error in cryptographic primitive]"
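
The "invalid bearer token, square/go-jose: error in cryptographic primitive" error usually means existing ServiceAccount tokens were signed with a key that no longer matches what the apiserver verifies against, which is plausible here after the certificate re-issuing above. A minimal consistency check, assuming the controller-manager signs with apiserver-key.pem (its --service-account-private-key-file in section 3) and the apiserver verifies with the matching public key:

# public key derived from the signing key used by kube-controller-manager
openssl rsa -in /etc/kubernetes/cert/apiserver-key.pem -pubout 2>/dev/null | md5sum
# public key inside the certificate the apiserver verifies against
openssl x509 -in /etc/kubernetes/cert/apiserver.pem -noout -pubkey | md5sum
# if the digests differ, old tokens cannot validate; after fixing the key pair,
# delete the stale *-token-* secrets so fresh tokens are generated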

7. kubelet

Aug 11 11:24:11 k8worker-1 kubelet[93253]: E0811 11:24:11.622966   93253 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
Aug 11 11:24:11 k8worker-1 kubelet[93253]: E0811 11:24:11.637144   93253 kubelet.go:2407] "Error getting node" err="node \"k8worker-1\" not found"
Aug 11 11:24:11 k8worker-1 kubelet[93253]: E0811 11:24:11.679321   93253 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "k8worker-1" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
--
Aug 11 11:24:20 k8worker-1 kubelet[93253]: E0811 11:24:20.234789   93253 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
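
The kubelet-bootstrap user being forbidden to create CertificateSigningRequests is the classic sign that the bootstrap user was never bound to the node-bootstrapper ClusterRole. A minimal fix, assuming the bootstrap kubeconfig really authenticates as kubelet-bootstrap (as the log shows):

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
# then restart kubelet on the worker so it retries the CSR
systemctl restart kubelet.service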


From: https://www.cnblogs.com/superingXin/p/17612543.html
