
kubernetes+calico+dashboard+kuboard


 

1. Environment preparation

Hostname         Aliases                           IP address        OS version
k8s-master-212   kubeapi.wang.org, api.wang.org    192.168.100.212   Ubuntu 20.04
k8s-master-213                                     192.168.100.213   Ubuntu 20.04
k8s-master-214                                     192.168.100.214   Ubuntu 20.04
k8s-node-215                                       192.168.100.215   Ubuntu 20.04
k8s-node-216                                       192.168.100.216   Ubuntu 20.04
k8s-node-217                                       192.168.100.217   Ubuntu 20.04
1-1. Disable the firewall
 ufw disable
 ufw status
1-2. Time synchronization
 apt install -y chrony
 systemctl restart chrony
 systemctl status chrony
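 #Optional check that the clock is actually synchronized before continuing:
 chronyc sources -v
 timedatectl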
1-3. Hostname resolution on all nodes
 vim /etc/hosts
 ​
 192.168.100.212 k8s-master-212 kubeapi.wang.org api.wang.org
 192.168.100.213 k8s-master-213
 192.168.100.214 k8s-master-214
 192.168.100.215 k8s-node-215
 192.168.100.216 k8s-node-216
 192.168.100.217 k8s-node-217
 ​
 cat /etc/hosts
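 #Optionally push the same hosts file to the remaining nodes (a sketch, assuming root SSH access between nodes):
 for i in 213 214 215 216 217; do scp /etc/hosts 192.168.100.$i:/etc/hosts; done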
1-4. Disable swap
 #Comment out the swap entry in /etc/fstab so swap stays off after a reboot
 sed -r -i '/\/swap/s@^@#@' /etc/fstab
 #Turn swap off immediately and list any remaining swap units
 swapoff -a
 systemctl --type swap
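 #Verify: swapon prints nothing and free reports 0B of swap once it is fully disabled
 swapon --show
 free -h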

2. Install Docker

 #Run on all nodes:
 [root@k8s-master-212 ~]#apt -y install apt-transport-https ca-certificates curl software-properties-common
 [root@k8s-master-212 ~]#curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
 [root@k8s-master-212 ~]#add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
 [root@k8s-master-212 ~]#apt update
 [root@k8s-master-212 ~]#apt install -y docker-ce
 [root@k8s-master-212 ~]#mkdir /etc/docker
 [root@k8s-master-212 ~]#vim /etc/docker/daemon.json
 {
     "registry-mirrors": [
         "https://docker.mirrors.ustc.edu.cn",
         "https://hub-mirror.c.163.com",
         "https://reg-mirror.qiniu.com",
         "https://registry.docker-cn.com"
 ],
     "exec-opts": ["native.cgroupdriver=systemd"],
     "log-driver": "json-file",
     "log-opts": {
     "max-size": "200m"
 },
     "storage-driver": "overlay2"
 }
 ​
 [root@k8s-master-212 ~]#systemctl daemon-reload
 [root@k8s-master-212 ~]#systemctl start docker
 [root@k8s-master-212 ~]#systemctl enable docker
 [root@k8s-master-212 ~]#docker version
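 #Confirm that Docker picked up the systemd cgroup driver configured in daemon.json:
 [root@k8s-master-212 ~]#docker info --format '{{.CgroupDriver}}'
 systemd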

3. Install cri-dockerd

 #Run on all nodes:
 #Download from: https://github.com/Mirantis/cri-dockerd
 [root@k8s-master-212 ~]#apt install ./cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb -y
 [root@k8s-master-212 ~]#systemctl status cri-docker.service
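 #The package also ships a cri-docker.socket unit; confirm both units are up and the socket file exists:
 [root@k8s-master-212 ~]#systemctl is-active cri-docker.socket cri-docker.service
 [root@k8s-master-212 ~]#ls -l /run/cri-dockerd.sock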

4. Install kubelet, kubeadm and kubectl

 #Run on all nodes:
 [root@k8s-master-212 ~]#apt install -y apt-transport-https curl
 [root@k8s-master-212 ~]#curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
 [root@k8s-master-212 ~]#cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
 deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
 EOF
 ​
 [root@k8s-master-212 ~]#apt update
 [root@k8s-master-212 ~]#apt install -y kubelet kubeadm kubectl
 [root@k8s-master-212 ~]#systemctl enable kubelet
 [root@k8s-master-212 ~]#kubeadm version
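 #Optionally pin the versions so a routine apt upgrade cannot move the cluster to a newer release:
 [root@k8s-master-212 ~]#apt-mark hold kubelet kubeadm kubectl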
 ​

5. Configure cri-dockerd and the kubelet

 #Run on all nodes:
 [root@k8s-master-212 ~]#vim /usr/lib/systemd/system/cri-docker.service
 #ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
 ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8 --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d
 ​
 [root@k8s-master-212 ~]#systemctl daemon-reload && systemctl restart cri-docker.service
 [root@k8s-master-212 ~]#systemctl status cri-docker
 #Run on all nodes:
 #Configure the kubelet:
 [root@k8s-master-212 ~]#mkdir /etc/sysconfig
 [root@k8s-master-212 ~]#vim /etc/sysconfig/kubelet
 KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/cri-dockerd.sock"
 ​
 [root@k8s-master-212 ~]#cat /etc/sysconfig/kubelet
 KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/cri-dockerd.sock"
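 #Confirm that systemd actually loaded the edited ExecStart line:
 [root@k8s-master-212 ~]#systemctl cat cri-docker.service | grep ExecStart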

6. Initialize the first master

 #Run on the first master node:
 [root@k8s-master-212 ~]#kubeadm config images list
 registry.k8s.io/kube-apiserver:v1.25.3
 registry.k8s.io/kube-controller-manager:v1.25.3
 registry.k8s.io/kube-scheduler:v1.25.3
 registry.k8s.io/kube-proxy:v1.25.3
 registry.k8s.io/pause:3.8
 registry.k8s.io/etcd:3.5.4-0
 registry.k8s.io/coredns/coredns:v1.9.3
 [root@k8s-master-212 ~]#kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --cri-socket unix:///run/cri-dockerd.sock
 ​
 ​
 [root@k8s-master-212 ~]#kubeadm init --control-plane-endpoint="kubeapi.wang.org" --kubernetes-version=v1.25.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0 --cri-socket unix:///run/cri-dockerd.sock --upload-certs --image-repository registry.aliyuncs.com/google_containers
 ​
 #The following output indicates that initialization succeeded:
 Your Kubernetes control-plane has initialized successfully!
 ​
 To start using your cluster, you need to run the following as a regular user:
 ​
   mkdir -p $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config
 ​
 Alternatively, if you are the root user, you can run:
 ​
   export KUBECONFIG=/etc/kubernetes/admin.conf
 ​
 You should now deploy a pod network to the cluster.
 Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
   https://kubernetes.io/docs/concepts/cluster-administration/addons/
 ​
 You can now join any number of the control-plane node running the following command on each as root:
 ​
   kubeadm join kubeapi.wang.org:6443 --token pskenf.k86i7t65t1ia1tp4 \
     --discovery-token-ca-cert-hash sha256:130544842dd77b5631a38af8473caf6e95797130b2796d7600d6744a3490fc01 \
     --control-plane --certificate-key 8a305652e4314718c265a1b714f28f662a3903bc38031891f789035a68d24a50
 ​
 Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
 As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
 "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
 ​
 Then you can join any number of worker nodes by running the following on each as root:
 ​
 kubeadm join kubeapi.wang.org:6443 --token pskenf.k86i7t65t1ia1tp4 \
     --discovery-token-ca-cert-hash sha256:130544842dd77b5631a38af8473caf6e95797130b2796d7600d6744a3490fc01 
 ​


 #Run on the first master node:
 [root@k8s-master-212 ~]#  mkdir -p $HOME/.kube
 [root@k8s-master-212 ~]#  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 [root@k8s-master-212 ~]#  sudo chown $(id -u):$(id -g) $HOME/.kube/config
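 #Quick sanity check that kubectl can now reach the new control plane:
 [root@k8s-master-212 ~]#kubectl cluster-info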

7. Join the second and third masters to the cluster

 #Run on the second and third master nodes:
 [root@k8s-master-213 ~]#kubeadm join kubeapi.wang.org:6443 --token pskenf.k86i7t65t1ia1tp4 --discovery-token-ca-cert-hash sha256:130544842dd77b5631a38af8473caf6e95797130b2796d7600d6744a3490fc01 --control-plane --certificate-key 8a305652e4314718c265a1b714f28f662a3903bc38031891f789035a68d24a50 --cri-socket unix:///run/cri-dockerd.sock
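 #To use kubectl on these masters as well, copy the kubeconfig just like on the first master:
 [root@k8s-master-213 ~]#mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 #If more than two hours have passed since kubeadm init, the uploaded certificates have expired;
 #re-upload them on the first master to obtain a fresh --certificate-key:
 [root@k8s-master-212 ~]#kubeadm init phase upload-certs --upload-certs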
 ​

8. Join the worker nodes to the cluster

 #Run on all worker nodes:
 [root@k8s-node-215 ~]#kubeadm join kubeapi.wang.org:6443 --token pskenf.k86i7t65t1ia1tp4 --discovery-token-ca-cert-hash sha256:130544842dd77b5631a38af8473caf6e95797130b2796d7600d6744a3490fc01 --cri-socket unix:///run/cri-dockerd.sock
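 #If the join command has been lost, regenerate it on a master (the token here was created with
 #--token-ttl=0 and never expires); remember to append the --cri-socket option shown above:
 [root@k8s-master-212 ~]#kubeadm token create --print-join-command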
 #Verify on a master that all nodes have joined; NotReady is expected until the network plugin is deployed in step 9:
 [root@k8s-master-212 ~]#kubectl get nodes
 NAME             STATUS     ROLES           AGE     VERSION
 k8s-master-212   NotReady   control-plane   9m49s   v1.25.3
 k8s-master-213   NotReady   control-plane   2m3s    v1.25.3
 k8s-master-214   NotReady   control-plane   37s     v1.25.3
 k8s-node-215     NotReady   <none>          19s     v1.25.3
 k8s-node-216     NotReady   <none>          16s     v1.25.3
 k8s-node-217     NotReady   <none>          15s     v1.25.3

9. Deploy Calico

 #Run on the first master node:
 #Download Calico in advance, as sketched below
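 #One way to fetch the manifests (a sketch, assuming the v3.24.4 release tag):
 [root@k8s-master-212 ~]#wget https://github.com/projectcalico/calico/archive/refs/tags/v3.24.4.tar.gz
 [root@k8s-master-212 ~]#tar xf v3.24.4.tar.gz && cd calico-3.24.4/manifests
 #calico.yaml ships with the 192.168.0.0/16 pool commented out as the default; this cluster was
 #initialized with --pod-network-cidr=10.244.0.0/16, so uncomment CALICO_IPV4POOL_CIDR in
 #calico.yaml and set it to 10.244.0.0/16 if pods do not receive addresses from that range.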
 [root@k8s-master-212 manifests]#pwd
 /root/calico-3.24.4/manifests
 ​
 [root@k8s-master-212 ~]#kubectl apply -f calico.yaml
 ​
 [root@k8s-master-212 ~]#kubectl get nodes
 NAME             STATUS   ROLES           AGE     VERSION
 k8s-master-212   Ready    control-plane   15m     v1.25.3
 k8s-master-213   Ready    control-plane   7m32s   v1.25.3
 k8s-master-214   Ready    control-plane   6m6s    v1.25.3
 k8s-node-215     Ready    <none>          5m48s   v1.25.3
 k8s-node-216     Ready    <none>          5m45s   v1.25.3
 k8s-node-217     Ready    <none>          5m44s   v1.25.3
 ​
 [root@k8s-master-212 manifests]#kubectl get pod -n kube-system
 NAME                                      READY   STATUS    RESTARTS      AGE
 calico-kube-controllers-f79f7749d-hxc2q   1/1     Running   0             5m20s
 calico-node-7j2bb                         1/1     Running   0             5m20s
 calico-node-8v5m8                         1/1     Running   0             5m20s
 calico-node-9qqmk                         1/1     Running   0             5m20s
 calico-node-jkv75                         1/1     Running   0             5m20s
 calico-node-l2f4v                         1/1     Running   0             5m20s
 calico-node-t4fmm                         1/1     Running   0             5m20s
 coredns-c676cc86f-8xx8g                   1/1     Running   0             92m
 coredns-c676cc86f-g8jpl                   1/1     Running   0             92m
 etcd-k8s-master-212                       1/1     Running   0             92m
 etcd-k8s-master-213                       1/1     Running   0             85m
 etcd-k8s-master-214                       1/1     Running   0             83m
 kube-apiserver-k8s-master-212             1/1     Running   0             93m
 kube-apiserver-k8s-master-213             1/1     Running   1 (85m ago)   85m
 kube-apiserver-k8s-master-214             1/1     Running   0             83m
 kube-controller-manager-k8s-master-212    1/1     Running   2 (80m ago)   93m
 kube-controller-manager-k8s-master-213    1/1     Running   0             85m
 kube-controller-manager-k8s-master-214    1/1     Running   0             83m
 kube-proxy-dgvsm                          1/1     Running   0             83m
 kube-proxy-fsx7x                          1/1     Running   0             83m
 kube-proxy-k98xv                          1/1     Running   0             83m
 kube-proxy-sr5ft                          1/1     Running   0             85m
 kube-proxy-wmjxv                          1/1     Running   0             83m
 kube-proxy-zb8lr                          1/1     Running   0             92m
 kube-scheduler-k8s-master-212             1/1     Running   1 (80m ago)   92m
 kube-scheduler-k8s-master-213             1/1     Running   0             85m
 kube-scheduler-k8s-master-214             1/1     Running   0             83m
 ​
 ​

10. Install the Dashboard

 #Run on the first master node:
 #Download link: https://codeload.github.com/kubernetes/dashboard/tar.gz/refs/tags/v2.6.0
 ​
 [root@k8s-master-212 ~]#vim dashboard-v2.6.0.yaml
 # Copyright 2017 The Kubernetes Authors.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
 #     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
 ​
 apiVersion: v1
 kind: Namespace
 metadata:
   name: kubernetes-dashboard
 ​
 ---
 ​
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard
   namespace: kubernetes-dashboard
 ​
 ---
 ​
 kind: Service
 apiVersion: v1
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard
   namespace: kubernetes-dashboard
 spec:
   type: NodePort
   ports:
     - port: 443
       targetPort: 8443
       nodePort: 30000
  #Note: the nodePort is set to 30000 here
   selector:
     k8s-app: kubernetes-dashboard
 ​
 ---
 ​
 apiVersion: v1
 kind: Secret
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard-certs
   namespace: kubernetes-dashboard
 type: Opaque
 ​
 ---
 ​
 apiVersion: v1
 kind: Secret
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard-csrf
   namespace: kubernetes-dashboard
 type: Opaque
 data:
   csrf: ""
 ​
 ---
 ​
 apiVersion: v1
 kind: Secret
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard-key-holder
   namespace: kubernetes-dashboard
 type: Opaque
 ​
 ---
 ​
 kind: ConfigMap
 apiVersion: v1
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard-settings
   namespace: kubernetes-dashboard
 ​
 ---
 ​
 kind: Role
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard
   namespace: kubernetes-dashboard
 rules:
   # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
   - apiGroups: [""]
     resources: ["secrets"]
     resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
     verbs: ["get", "update", "delete"]
     # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
   - apiGroups: [""]
     resources: ["configmaps"]
     resourceNames: ["kubernetes-dashboard-settings"]
     verbs: ["get", "update"]
     # Allow Dashboard to get metrics.
   - apiGroups: [""]
     resources: ["services"]
     resourceNames: ["heapster", "dashboard-metrics-scraper"]
     verbs: ["proxy"]
   - apiGroups: [""]
     resources: ["services/proxy"]
     resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
     verbs: ["get"]
 ​
 ---
 ​
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard
 rules:
   # Allow Metrics Scraper to get metrics from the Metrics server
   - apiGroups: ["metrics.k8s.io"]
     resources: ["pods", "nodes"]
     verbs: ["get", "list", "watch"]
 ​
 ---
 ​
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard
   namespace: kubernetes-dashboard
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
   name: kubernetes-dashboard
 subjects:
   - kind: ServiceAccount
     name: kubernetes-dashboard
     namespace: kubernetes-dashboard
 ​
 ---
 ​
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kubernetes-dashboard
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kubernetes-dashboard
 subjects:
   - kind: ServiceAccount
     name: kubernetes-dashboard
     namespace: kubernetes-dashboard
 ​
 ---
 ​
 kind: Deployment
 apiVersion: apps/v1
 metadata:
   labels:
     k8s-app: kubernetes-dashboard
   name: kubernetes-dashboard
   namespace: kubernetes-dashboard
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       k8s-app: kubernetes-dashboard
   template:
     metadata:
       labels:
         k8s-app: kubernetes-dashboard
     spec:
       securityContext:
         seccompProfile:
           type: RuntimeDefault
       containers:
         - name: kubernetes-dashboard
           image: kubernetesui/dashboard:v2.6.0
           imagePullPolicy: Always
           ports:
             - containerPort: 8443
               protocol: TCP
           args:
             - --auto-generate-certificates
             - --namespace=kubernetes-dashboard
             - --token-ttl=43200
             # Uncomment the following line to manually specify Kubernetes API server Host
             # If not specified, Dashboard will attempt to auto discover the API server and connect
             # to it. Uncomment only if the default does not work.
             # - --apiserver-host=http://my-address:port
           volumeMounts:
             - name: kubernetes-dashboard-certs
               mountPath: /certs
               # Create on-disk volume to store exec logs
             - mountPath: /tmp
               name: tmp-volume
           livenessProbe:
             httpGet:
               scheme: HTTPS
               path: /
               port: 8443
             initialDelaySeconds: 30
             timeoutSeconds: 30
           securityContext:
             allowPrivilegeEscalation: false
             readOnlyRootFilesystem: true
             runAsUser: 1001
             runAsGroup: 2001
       volumes:
         - name: kubernetes-dashboard-certs
           secret:
             secretName: kubernetes-dashboard-certs
         - name: tmp-volume
           emptyDir: {}
       serviceAccountName: kubernetes-dashboard
       nodeSelector:
         "kubernetes.io/os": linux
       # Comment the following tolerations if Dashboard must not be deployed on master
       tolerations:
         - key: node-role.kubernetes.io/master
           effect: NoSchedule
 ​
 ---
 ​
 kind: Service
 apiVersion: v1
 metadata:
   labels:
     k8s-app: dashboard-metrics-scraper
   name: dashboard-metrics-scraper
   namespace: kubernetes-dashboard
 spec:
   ports:
     - port: 8000
       targetPort: 8000
   selector:
     k8s-app: dashboard-metrics-scraper
 ​
 ---
 ​
 kind: Deployment
 apiVersion: apps/v1
 metadata:
   labels:
     k8s-app: dashboard-metrics-scraper
   name: dashboard-metrics-scraper
   namespace: kubernetes-dashboard
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       k8s-app: dashboard-metrics-scraper
   template:
     metadata:
       labels:
         k8s-app: dashboard-metrics-scraper
     spec:
       securityContext:
         seccompProfile:
           type: RuntimeDefault
       containers:
         - name: dashboard-metrics-scraper
           image: kubernetesui/metrics-scraper:v1.0.8
           ports:
             - containerPort: 8000
               protocol: TCP
           livenessProbe:
             httpGet:
               scheme: HTTP
               path: /
               port: 8000
             initialDelaySeconds: 30
             timeoutSeconds: 30
           volumeMounts:
           - mountPath: /tmp
             name: tmp-volume
           securityContext:
             allowPrivilegeEscalation: false
             readOnlyRootFilesystem: true
             runAsUser: 1001
             runAsGroup: 2001
       serviceAccountName: kubernetes-dashboard
       nodeSelector:
         "kubernetes.io/os": linux
       # Comment the following tolerations if Dashboard must not be deployed on master
       tolerations:
         - key: node-role.kubernetes.io/master
           effect: NoSchedule
       volumes:
         - name: tmp-volume
           emptyDir: {}
 ​
 ​
 [root@k8s-master-212 ~]#vim admin-secret.yaml
 apiVersion: v1
 kind: Secret
 type: kubernetes.io/service-account-token
 metadata:
   name: dashboard-admin-user
   namespace: kubernetes-dashboard
   annotations:
     kubernetes.io/service-account.name: "admin-user"
 ​
 [root@k8s-master-212 ~]#vim admin-user.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: admin-user
   namespace: kubernetes-dashboard
 ​
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: admin-user
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: admin-user
   namespace: kubernetes-dashboard
 ​
 ​
 [root@k8s-master-212 ~]#kubectl apply -f dashboard-v2.6.0.yaml -f admin-user.yaml -f admin-secret.yaml 
 namespace/kubernetes-dashboard created
 serviceaccount/kubernetes-dashboard created
 service/kubernetes-dashboard created
 secret/kubernetes-dashboard-certs created
 secret/kubernetes-dashboard-csrf created
 secret/kubernetes-dashboard-key-holder created
 configmap/kubernetes-dashboard-settings created
 role.rbac.authorization.k8s.io/kubernetes-dashboard created
 clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
 rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
 clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
 deployment.apps/kubernetes-dashboard created
 service/dashboard-metrics-scraper created
 deployment.apps/dashboard-metrics-scraper created
 serviceaccount/admin-user created
 clusterrolebinding.rbac.authorization.k8s.io/admin-user created
 secret/dashboard-admin-user created
 ​
 [root@k8s-master-212 ~]#kubectl get secret -A
 NAMESPACE              NAME                              TYPE                                  DATA   AGE
 kube-system            bootstrap-token-pskenf            bootstrap.kubernetes.io/token         6      146m
 kubernetes-dashboard   dashboard-admin-user              kubernetes.io/service-account-token   3      4m5s
 kubernetes-dashboard   kubernetes-dashboard-certs        Opaque                                0      4m5s
 kubernetes-dashboard   kubernetes-dashboard-csrf         Opaque                                1      4m5s
 kubernetes-dashboard   kubernetes-dashboard-key-holder   Opaque                                2      4m5s
 ​
 [root@k8s-master-212 ~]#kubectl describe secrets -n kubernetes-dashboard dashboard-admin-user
 Name:         dashboard-admin-user
 Namespace:    kubernetes-dashboard
 Labels:       <none>
 Annotations:  kubernetes.io/service-account.name: admin-user
               kubernetes.io/service-account.uid: a8750def-e114-4bee-92b2-172068d2cab8
 ​
 Type:  kubernetes.io/service-account-token
 ​
 Data
 ====
 ca.crt:     1099 bytes
 namespace:  20 bytes
 token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImdJNk12Yy01YjcwcWRwUEpxZHNRSHhHdWFUcEZZS094UzB6Q29OVnI5OGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYTg3NTBkZWYtZTExNC00YmVlLTkyYjItMTcyMDY4ZDJjYWI4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.wPEGvjiYHEZwEOm2nHGZgQ3L2mE4STxWGevPSNmw78FB5JWyH7iYtAMSVsS24YhrreP6VOeJeZD9cZMfVJnx6QOJ6HhBz7wF3MXBlx0KGdTAfs4EuZYEtKT07dXHJwBSAf1AsJdsTIazn0dp7kw-qBPZpMrCLEhIQ665FJOyXMiYV4Jn07h-lr-iRxzYpN9RjdKqHn8fOpO8T0gpeQ4WpjSsx_EBa8yhoqgOLWCtjl5sDFNOlE9oRWS5SpIZKR9rshrF5ySdvecrKKDtRmJhlnsJyycTPCmz4DFmPRQoQnQJkvKpTdvNmIZVXTSLiR4VbP8kIGPkTz2UgyQKhuQh6A
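 #The token alone can also be printed directly (assuming the secret name created above):
 [root@k8s-master-212 ~]#kubectl -n kubernetes-dashboard get secret dashboard-admin-user -o jsonpath='{.data.token}' | base64 -d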
 ​
 [root@k8s-master-212 ~]#kubectl get svc -A
 NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
 default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  147m
 kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   147m
 kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.103.23.119    <none>        8000/TCP                 5m17s
 kubernetes-dashboard   kubernetes-dashboard        NodePort    10.103.140.218   <none>        443:30000/TCP            5m18s
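 #The dashboard is served on NodePort 30000 of every node; a quick reachability check
 #before logging in with the token above (any node IP works):
 [root@k8s-master-212 ~]#curl -k -I https://192.168.100.212:30000/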
 ​


11. Install Kuboard

 #Run on the first master node:
 #Pull the required images in advance
 [root@k8s-master-212 ~]#docker pull eipwork/kuboard-agent:v3
 [root@k8s-master-212 ~]#docker pull eipwork/etcd-host:3.4.16-1
 [root@k8s-master-212 ~]#docker pull eipwork/kuboard:v3
 [root@k8s-master-212 ~]#docker pull questdb/questdb:6.0.5
 #kuboard-v3.yaml download link: https://kuboard.cn/install/v3/install-in-k8s.html#%E5%AE%89%E8%A3%85
 [root@k8s-master-212 ~]#vim kuboard-v3.yaml      
 ---
 apiVersion: v1
 kind: Namespace
 metadata:
   name: kuboard
 ​
 ---
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kuboard-v3-config
   namespace: kuboard
 data:
  # For an explanation of the parameters below, see https://kuboard.cn/install/v3/install-built-in.html
   # [common]
   KUBOARD_SERVER_NODE_PORT: '30080'
   KUBOARD_AGENT_SERVER_UDP_PORT: '30081'
   KUBOARD_AGENT_SERVER_TCP_PORT: '30081'
   KUBOARD_SERVER_LOGRUS_LEVEL: info  # error / debug / trace
  # KUBOARD_AGENT_KEY is the key the Agent uses to talk to Kuboard. Change it to any 32-character string of letters and digits; after changing it, delete the Kuboard Agent and re-import the cluster.
   KUBOARD_AGENT_KEY: 32b7d6572c6255211b4eec9009e4a816
   KUBOARD_AGENT_IMAG: eipwork/kuboard-agent
   KUBOARD_QUESTDB_IMAGE: questdb/questdb:6.0.5
  KUBOARD_DISABLE_AUDIT: 'false' # Set to 'true' (quotes required) to disable Kuboard's audit feature.
 ​
  # For an explanation of the parameters below, see https://kuboard.cn/install/v3/install-gitlab.html
   # [gitlab login]
   # KUBOARD_LOGIN_TYPE: "gitlab"
   # KUBOARD_ROOT_USER: "your-user-name-in-gitlab"
   # GITLAB_BASE_URL: "http://gitlab.mycompany.com"
   # GITLAB_APPLICATION_ID: "7c10882aa46810a0402d17c66103894ac5e43d6130b81c17f7f2d8ae182040b5"
   # GITLAB_CLIENT_SECRET: "77c149bd3a4b6870bffa1a1afaf37cba28a1817f4cf518699065f5a8fe958889"
   
  # For an explanation of the parameters below, see https://kuboard.cn/install/v3/install-github.html
   # [github login]
   # KUBOARD_LOGIN_TYPE: "github"
   # KUBOARD_ROOT_USER: "your-user-name-in-github"
   # GITHUB_CLIENT_ID: "17577d45e4de7dad88e0"
   # GITHUB_CLIENT_SECRET: "ff738553a8c7e9ad39569c8d02c1d85ec19115a7"
 ​
  # For an explanation of the parameters below, see https://kuboard.cn/install/v3/install-ldap.html
   # [ldap login]
   # KUBOARD_LOGIN_TYPE: "ldap"
   # KUBOARD_ROOT_USER: "your-user-name-in-ldap"
   # LDAP_HOST: "ldap-ip-address:389"
   # LDAP_BIND_DN: "cn=admin,dc=example,dc=org"
   # LDAP_BIND_PASSWORD: "admin"
   # LDAP_BASE_DN: "dc=example,dc=org"
   # LDAP_FILTER: "(objectClass=posixAccount)"
   # LDAP_ID_ATTRIBUTE: "uid"
   # LDAP_USER_NAME_ATTRIBUTE: "uid"
   # LDAP_EMAIL_ATTRIBUTE: "mail"
   # LDAP_DISPLAY_NAME_ATTRIBUTE: "cn"
   # LDAP_GROUP_SEARCH_BASE_DN: "dc=example,dc=org"
   # LDAP_GROUP_SEARCH_FILTER: "(objectClass=posixGroup)"
   # LDAP_USER_MACHER_USER_ATTRIBUTE: "gidNumber"
   # LDAP_USER_MACHER_GROUP_ATTRIBUTE: "gidNumber"
   # LDAP_GROUP_NAME_ATTRIBUTE: "cn"
 ​
 ---
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kuboard-boostrap
   namespace: kuboard
 ​
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kuboard-boostrap-crb
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: kuboard-boostrap
   namespace: kuboard
 ​
 ---
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   labels:
     k8s.kuboard.cn/name: kuboard-etcd
   name: kuboard-etcd
   namespace: kuboard
 spec:
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       k8s.kuboard.cn/name: kuboard-etcd
   template:
     metadata:
       labels:
         k8s.kuboard.cn/name: kuboard-etcd
     spec:
       affinity:
         nodeAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
             nodeSelectorTerms:
               - matchExpressions:
                   - key: node-role.kubernetes.io/master
                     operator: Exists
               - matchExpressions:
                   - key: node-role.kubernetes.io/control-plane
                     operator: Exists
               - matchExpressions:
                   - key: k8s.kuboard.cn/role
                     operator: In
                     values:
                       - etcd
       containers:
         - env:
             - name: HOSTNAME
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: spec.nodeName
             - name: HOSTIP
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: status.hostIP
           image: 'eipwork/etcd-host:3.4.16-2'
           imagePullPolicy: Always
           name: etcd
           ports:
             - containerPort: 2381
               hostPort: 2381
               name: server
               protocol: TCP
             - containerPort: 2382
               hostPort: 2382
               name: peer
               protocol: TCP
           livenessProbe:
             failureThreshold: 3
             httpGet:
               path: /health
               port: 2381
               scheme: HTTP
             initialDelaySeconds: 30
             periodSeconds: 10
             successThreshold: 1
             timeoutSeconds: 1
           volumeMounts:
             - mountPath: /data
               name: data
       dnsPolicy: ClusterFirst
       hostNetwork: true
       restartPolicy: Always
       serviceAccount: kuboard-boostrap
       serviceAccountName: kuboard-boostrap
       tolerations:
         - key: node-role.kubernetes.io/master
           operator: Exists
         - key: node-role.kubernetes.io/control-plane
           operator: Exists
       volumes:
         - hostPath:
             path: /usr/share/kuboard/etcd
           name: data
   updateStrategy:
     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
 ​
 ​
 ---
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   annotations: {}
   labels:
     k8s.kuboard.cn/name: kuboard-v3
   name: kuboard-v3
   namespace: kuboard
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       k8s.kuboard.cn/name: kuboard-v3
   template:
     metadata:
       labels:
         k8s.kuboard.cn/name: kuboard-v3
     spec:
       affinity:
         nodeAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
             - preference:
                 matchExpressions:
                   - key: node-role.kubernetes.io/master
                     operator: Exists
               weight: 100
             - preference:
                 matchExpressions:
                   - key: node-role.kubernetes.io/control-plane
                     operator: Exists
               weight: 100
       containers:
         - env:
             - name: HOSTIP
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: status.hostIP
             - name: HOSTNAME
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: spec.nodeName
           envFrom:
             - configMapRef:
                 name: kuboard-v3-config
           image: 'eipwork/kuboard:v3'
           imagePullPolicy: Always
           livenessProbe:
             failureThreshold: 3
             httpGet:
               path: /kuboard-resources/version.json
               port: 80
               scheme: HTTP
             initialDelaySeconds: 30
             periodSeconds: 10
             successThreshold: 1
             timeoutSeconds: 1
           name: kuboard
           ports:
             - containerPort: 80
               name: web
               protocol: TCP
             - containerPort: 443
               name: https
               protocol: TCP
             - containerPort: 10081
               name: peer
               protocol: TCP
             - containerPort: 10081
               name: peer-u
               protocol: UDP
           readinessProbe:
             failureThreshold: 3
             httpGet:
               path: /kuboard-resources/version.json
               port: 80
               scheme: HTTP
             initialDelaySeconds: 30
             periodSeconds: 10
             successThreshold: 1
             timeoutSeconds: 1
           resources: {}
           # startupProbe:
           #   failureThreshold: 20
           #   httpGet:
           #     path: /kuboard-resources/version.json
           #     port: 80
           #     scheme: HTTP
           #   initialDelaySeconds: 5
           #   periodSeconds: 10
           #   successThreshold: 1
           #   timeoutSeconds: 1
       dnsPolicy: ClusterFirst
       restartPolicy: Always
       serviceAccount: kuboard-boostrap
       serviceAccountName: kuboard-boostrap
       tolerations:
         - key: node-role.kubernetes.io/master
           operator: Exists
 ​
 ---
 apiVersion: v1
 kind: Service
 metadata:
   annotations: {}
   labels:
     k8s.kuboard.cn/name: kuboard-v3
   name: kuboard-v3
   namespace: kuboard
 spec:
   ports:
     - name: web
       nodePort: 30080
       port: 80
       protocol: TCP
       targetPort: 80
     - name: tcp
       nodePort: 30081
       port: 10081
       protocol: TCP
       targetPort: 10081
     - name: udp
       nodePort: 30081
       port: 10081
       protocol: UDP
       targetPort: 10081
   selector:
     k8s.kuboard.cn/name: kuboard-v3
   sessionAffinity: None
   type: NodePort
 ​
 ​
 [root@k8s-master-212 ~]#kubectl apply -f kuboard-v3.yaml
 #kubeadm 1.24+ only sets the control-plane role label, so the master label is added by hand
 #here so that manifests keying on node-role.kubernetes.io/master (like the affinity above) still match:
 [root@k8s-master-212 ~]#kubectl label node k8s-master-212 node-role.kubernetes.io/master=
 node/k8s-master-212 labeled
 [root@k8s-master-212 ~]#kubectl label node k8s-master-213 node-role.kubernetes.io/master=
 node/k8s-master-213 labeled
 [root@k8s-master-212 ~]#kubectl label node k8s-master-214 node-role.kubernetes.io/master=
 node/k8s-master-214 labeled
 [root@k8s-master-212 ~]#kubectl get pods -A
 NAMESPACE              NAME                                         READY   STATUS    RESTARTS        AGE
 kube-system            calico-kube-controllers-f79f7749d-hxc2q      1/1     Running   0               122m
 kube-system            calico-node-7j2bb                            1/1     Running   0               122m
 kube-system            calico-node-8v5m8                            1/1     Running   0               122m
 kube-system            calico-node-9qqmk                            1/1     Running   0               122m
 kube-system            calico-node-jkv75                            1/1     Running   0               122m
 kube-system            calico-node-l2f4v                            1/1     Running   0               122m
 kube-system            calico-node-t4fmm                            1/1     Running   0               122m
 kube-system            coredns-c676cc86f-8xx8g                      1/1     Running   0               3h29m
 kube-system            coredns-c676cc86f-g8jpl                      1/1     Running   0               3h29m
 kube-system            etcd-k8s-master-212                          1/1     Running   0               3h30m
 kube-system            etcd-k8s-master-213                          1/1     Running   0               3h22m
 kube-system            etcd-k8s-master-214                          1/1     Running   0               3h21m
 kube-system            kube-apiserver-k8s-master-212                1/1     Running   0               3h30m
 kube-system            kube-apiserver-k8s-master-213                1/1     Running   1 (3h22m ago)   3h22m
 kube-system            kube-apiserver-k8s-master-214                1/1     Running   0               3h20m
 kube-system            kube-controller-manager-k8s-master-212       1/1     Running   2 (3h17m ago)   3h30m
 kube-system            kube-controller-manager-k8s-master-213       1/1     Running   0               3h22m
 kube-system            kube-controller-manager-k8s-master-214       1/1     Running   0               3h20m
 kube-system            kube-proxy-dgvsm                             1/1     Running   0               3h20m
 kube-system            kube-proxy-fsx7x                             1/1     Running   0               3h20m
 kube-system            kube-proxy-k98xv                             1/1     Running   0               3h21m
 kube-system            kube-proxy-sr5ft                             1/1     Running   0               3h22m
 kube-system            kube-proxy-wmjxv                             1/1     Running   0               3h20m
 kube-system            kube-proxy-zb8lr                             1/1     Running   0               3h29m
 kube-system            kube-scheduler-k8s-master-212                1/1     Running   1 (3h17m ago)   3h30m
 kube-system            kube-scheduler-k8s-master-213                1/1     Running   0               3h22m
 kube-system            kube-scheduler-k8s-master-214                1/1     Running   0               3h20m
 kubernetes-dashboard   dashboard-metrics-scraper-64bcc67c9c-kt9sr   1/1     Running   0               68m
 kubernetes-dashboard   kubernetes-dashboard-86975ff5cb-hqsd5        1/1     Running   0               68m
 kuboard                kuboard-agent-2-85bf8c9d75-269mh             1/1     Running   1 (52s ago)     68s
 kuboard                kuboard-agent-6bc489fdc5-2bhl9               1/1     Running   1 (51s ago)     68s
 kuboard                kuboard-etcd-8qrd4                           1/1     Running   0               9m41s
 kuboard                kuboard-etcd-jxr68                           1/1     Running   0               9m41s
 kuboard                kuboard-etcd-zht2z                           1/1     Running   0               9m41s
 kuboard                kuboard-v3-664cc56698-kgvkk                  1/1     Running   5 (90s ago)     9m41s
 ​
 [root@k8s-master-212 ~]#kubectl get svc -A
 NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
 default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                                        3h30m
 kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP                         3h30m
 kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.103.23.119    <none>        8000/TCP                                       68m
 kubernetes-dashboard   kubernetes-dashboard        NodePort    10.103.140.218   <none>        443:30000/TCP                                  68m
 kuboard                kuboard-v3                  NodePort    10.106.214.183   <none>        80:30080/TCP,10081:30081/TCP,10081:30081/UDP   10m
 ​
 #In a browser, open any node's IP on port 30080. Username: admin, password: Kuboard123
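 #Optional reachability check from the shell before opening the browser (any node IP works):
 [root@k8s-master-212 ~]#curl -I http://192.168.100.212:30080/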


12. Troubleshooting

 #If apt install ./cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb prints the following message:
 N: Download is performed unsandboxed as root as file '/root/cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
 ​
 #Fix 1: restart the service
 systemctl restart cri-docker.service
 ​
 #Fix 2: copy the .deb file to /tmp and install it from there
 ​
 =============================================================================================
 #If Calico does not come up, inspect the pod with:
 [root@k8s-master-212 ~]#kubectl describe pod calico-kube-controllers-8f5fb46c-xkmq7 -n kube-system   
 #The error reads:
 ​
 Warning  FailedCreatePodSandBox  22m (x4 over 22m)    kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0781e60f3d7090ea1060ab938528d6db9a180cc6370d03eda0ef43535fb3e0ef" network for pod "calico-kube-controllers-8f5fb46c-rvgfd": networkPlugin cni failed to set up pod "calico-kube-controllers-8f5fb46c-rvgfd_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  BackOff                 3m5s (x84 over 21m)  kubelet            Back-off restarting failed container
   
 #Create /var/lib/calico on all nodes:
 mkdir /var/lib/calico
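 #then delete the failing pod so its Deployment recreates it and it retries
 #(a sketch, assuming the pod label from the stock calico.yaml):
 kubectl -n kube-system delete pod -l k8s-app=calico-kube-controllers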
 ​
 ===============================================================================================
 #If installing Calico fails with the error below, the Calico version does not match the Kubernetes version; download a newer Calico release:
 error: resource mapping not found for name: "calico-kube-controllers" namespace: "kube-system" from "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
 ​
 #Download from: https://github.com/projectcalico/calico/tree/v3.24.4
 ​
 ​
 ==============================================================================================
 #If a pod is stuck in Terminating and cannot be deleted normally, delete it with --force:
 [root@k8s-master-212 ~]#kubectl get pod -A
 NAMESPACE     NAME                                     READY   STATUS        RESTARTS      AGE
 kube-system   calico-kube-controllers-8f5fb46c-5sz4z   0/1     Terminating   6             11m
 ​
 [root@k8s-master-212 ~]#kubectl delete --force pod calico-kube-controllers-8f5fb46c-5sz4z -n kube-system

 

From: https://www.cnblogs.com/wdy001/p/16861656.html
