
Monitoring K8S with prometheus-operator

Posted: 2022-10-27 09:35:32 · Views: 66

prometheus-operator

1. Introduction to Prometheus Operator

Introductory article: http://t.zoukankan.com/twobrother-p-11164391.html

At the end of 2016, CoreOS introduced the Operator pattern and released Prometheus Operator as a working example of it. Prometheus Operator automatically creates and manages Prometheus monitoring instances.

The mission of Prometheus Operator is to make running Prometheus on Kubernetes as easy as possible, while preserving configurability and keeping the configuration Kubernetes-native.

Prometheus Operator makes our lives easier when it comes to deployment and maintenance.

2. How it works

To understand this, we first need to look at how Prometheus Operator works.

Prometheus Operator architecture diagram:


After successfully deploying Prometheus Operator, we can see a set of new CRDs (Custom Resource Definitions):

  • Prometheus, which defines a desired Prometheus deployment
  • ServiceMonitor, which declaratively specifies how groups of services should be monitored; the Operator automatically generates the Prometheus scrape configuration from the definition
  • Alertmanager, which defines a desired Alertmanager deployment

When a new version of a service is rolled out, a new Pod is created. Prometheus watches the k8s API, so when it detects this kind of change it creates a new scrape configuration for the new service (pod).

3. ServiceMonitor

Prometheus Operator uses a CRD called ServiceMonitor to abstract the configuration of scrape targets.
Here is an example ServiceMonitor:

apiVersion: monitoring.coreos.com/v1alpha1
kind: ServiceMonitor
metadata:
  name: frontend
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      tier: frontend
  endpoints:
  - port: web            # exporter port; this refers to the endpoint port name
    interval: 10s        # scrape interval

This only defines how a group of services should be monitored. Now we need to define a Prometheus instance that includes this ServiceMonitor in its configuration:

apiVersion: monitoring.coreos.com/v1alpha1
kind: Prometheus
metadata:
  name: prometheus-frontend
  labels:
    prometheus: frontend
spec:
  version: v1.3.0
  # include all ServiceMonitors with label tier=frontend in this server's configuration
  serviceMonitors:
  - selector:
      matchLabels:
        tier: frontend

Now Prometheus will monitor every service carrying the tier: frontend label.
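For reference, a Service that this selector would match could look like the sketch below. The name, namespace, and port number are illustrative assumptions; only the tier: frontend label and the web port name are taken from the ServiceMonitor above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend              # hypothetical name
  labels:
    tier: frontend            # matched by the ServiceMonitor's matchLabels
spec:
  selector:
    app: frontend             # hypothetical pod selector
  ports:
  - name: web                 # must match "port: web" in the ServiceMonitor
    port: 8080
```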

4. Installing with Helm

Prerequisites:

  • Helm is deployed

Let's get hands-on:

 helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/
 helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring

So far, we have installed the Prometheus Operator TPRs into our cluster.
Now let's deploy Prometheus, Alertmanager, and Grafana.

TIP: When I work with a large Helm chart, I prefer to create a separate values.yaml file containing all of my custom changes. This makes later changes and modifications easier for me and my colleagues.

helm install coreos/kube-prometheus --name kube-prometheus   \
       -f my_changes/prometheus.yaml                           \
       -f my_changes/grafana.yaml                              \
       -f my_changes/alertmanager.yaml
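As a sketch of what such an override file might contain, something like the following could go into my_changes/prometheus.yaml. The keys here are illustrative assumptions, not a definitive reference for the coreos/kube-prometheus chart's values:

```yaml
# my_changes/prometheus.yaml — hypothetical overrides
prometheus:
  retention: 24h            # how long to keep metrics
  resources:
    requests:
      memory: 512Mi
  storageSpec: {}           # e.g. a volumeClaimTemplate for persistence
```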

Check that everything is running correctly:

 kubectl -n monitoring get po
NAME                                                   READY     STATUS    RESTARTS   AGE
alertmanager-kube-prometheus-0                         2/2       Running   0          1h
kube-prometheus-exporter-kube-state-68dbb4f7c9-tr6rp   2/2       Running   0          1h
kube-prometheus-exporter-node-bqcj4                    1/1       Running   0          1h
kube-prometheus-exporter-node-jmcq2                    1/1       Running   0          1h
kube-prometheus-exporter-node-qnzsn                    1/1       Running   0          1h
kube-prometheus-exporter-node-v4wn8                    1/1       Running   0          1h
kube-prometheus-exporter-node-x5226                    1/1       Running   0          1h
kube-prometheus-exporter-node-z996c                    1/1       Running   0          1h
kube-prometheus-grafana-54c96ffc77-tjl6g               2/2       Running   0          1h
prometheus-kube-prometheus-0                           2/2       Running   0          1h
prometheus-operator-1591343780-5vb5q                   1/1       Running   0          1h

Access the Prometheus UI and take a look at the Targets page:

 kubectl -n monitoring port-forward prometheus-kube-prometheus-0 9090
Forwarding from 127.0.0.1:9090 -> 9090

The Targets page is then shown in the browser.

5. Installing from YAML files

I have tested this installation method myself. All the YAML files it uses are packaged together; after unpacking, a single kubectl apply is enough, and it will automatically monitor every node and pod in the current cluster. You only need to change the images referenced in the YAML files. I pushed mine to our company's public Harbor registry; some images are packaged as tar archives and can be loaded directly with docker load -i:
kube-state.tar.gz
webhook-dingtalk.tar.gz
prometheus-adapter.tar.gz

5.1 Installation

# The package bundles the configuration for node-exporter, alertmanager, grafana, prometheus, and ingress; just unpack it on the K8S master.
[root@lecode-k8s-master monitor]# ll
total 1820
-rw-r--r-- 1 root root     875 Mar 11  2022 alertmanager-alertmanager.yaml
-rw-r--r-- 1 root root     515 Mar 11  2022 alertmanager-podDisruptionBudget.yaml
-rw-r--r-- 1 root root    4337 Mar 11  2022 alertmanager-prometheusRule.yaml
-rw-r--r-- 1 root root    1483 Mar 14  2022 alertmanager-secret.yaml
-rw-r--r-- 1 root root     301 Mar 11  2022 alertmanager-serviceAccount.yaml
-rw-r--r-- 1 root root     540 Mar 11  2022 alertmanager-serviceMonitor.yaml
-rw-r--r-- 1 root root     614 Mar 11  2022 alertmanager-service.yaml
drwxr-x--- 2 root root    4096 Oct 25 13:49 backsvc # grafana Service manifest (NodePort mode) for external access; optional
-rw-r--r-- 1 root root     278 Mar 11  2022 blackbox-exporter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     287 Mar 11  2022 blackbox-exporter-clusterRole.yaml
-rw-r--r-- 1 root root    1392 Mar 11  2022 blackbox-exporter-configuration.yaml
-rw-r--r-- 1 root root    3081 Mar 11  2022 blackbox-exporter-deployment.yaml
-rw-r--r-- 1 root root      96 Mar 11  2022 blackbox-exporter-serviceAccount.yaml
-rw-r--r-- 1 root root     680 Mar 11  2022 blackbox-exporter-serviceMonitor.yaml
-rw-r--r-- 1 root root     540 Mar 11  2022 blackbox-exporter-service.yaml
-rw-r--r-- 1 root root    2521 Oct 25 13:36 dingtalk-dep.yaml
-rw-r--r-- 1 root root     721 Mar 11  2022 grafana-dashboardDatasources.yaml
-rw-r--r-- 1 root root 1448347 Mar 11  2022 grafana-dashboardDefinitions.yaml
-rw-r--r-- 1 root root     625 Mar 11  2022 grafana-dashboardSources.yaml
-rw-r--r-- 1 root root    8098 Mar 11  2022 grafana-deployment.yaml
-rw-r--r-- 1 root root      86 Mar 11  2022 grafana-serviceAccount.yaml
-rw-r--r-- 1 root root     398 Mar 11  2022 grafana-serviceMonitor.yaml
-rw-r--r-- 1 root root     468 Mar 30  2022 grafana-service.yaml
drwxr-xr-x 2 root root    4096 Oct 25 13:32 ingress # Ingress resources, usable as-is to expose Prometheus and grafana externally
-rw-r--r-- 1 root root    2639 Mar 14  2022 kube-prometheus-prometheusRule.yaml
-rw-r--r-- 1 root root    3380 Mar 14  2022 kube-prometheus-prometheusRule.yamlbak
-rw-r--r-- 1 root root   63531 Mar 11  2022 kubernetes-prometheusRule.yaml
-rw-r--r-- 1 root root    6912 Mar 11  2022 kubernetes-serviceMonitorApiserver.yaml
-rw-r--r-- 1 root root     425 Mar 11  2022 kubernetes-serviceMonitorCoreDNS.yaml
-rw-r--r-- 1 root root    6431 Mar 11  2022 kubernetes-serviceMonitorKubeControllerManager.yaml
-rw-r--r-- 1 root root    7629 Mar 11  2022 kubernetes-serviceMonitorKubelet.yaml
-rw-r--r-- 1 root root     530 Mar 11  2022 kubernetes-serviceMonitorKubeScheduler.yaml
-rw-r--r-- 1 root root     464 Mar 11  2022 kube-state-metrics-clusterRoleBinding.yaml
-rw-r--r-- 1 root root    1712 Mar 11  2022 kube-state-metrics-clusterRole.yaml
-rw-r--r-- 1 root root    2934 Oct 25 13:40 kube-state-metrics-deployment.yaml
-rw-r--r-- 1 root root    3082 Mar 11  2022 kube-state-metrics-prometheusRule.yaml
-rw-r--r-- 1 root root     280 Mar 11  2022 kube-state-metrics-serviceAccount.yaml
-rw-r--r-- 1 root root    1011 Mar 11  2022 kube-state-metrics-serviceMonitor.yaml
-rw-r--r-- 1 root root     580 Mar 11  2022 kube-state-metrics-service.yaml
-rw-r--r-- 1 root root     444 Mar 11  2022 node-exporter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     461 Mar 11  2022 node-exporter-clusterRole.yaml
-rw-r--r-- 1 root root    3047 Mar 11  2022 node-exporter-daemonset.yaml
-rw-r--r-- 1 root root   14356 Apr 11  2022 node-exporter-prometheusRule.yaml
-rw-r--r-- 1 root root     270 Mar 11  2022 node-exporter-serviceAccount.yaml
-rw-r--r-- 1 root root     850 Mar 11  2022 node-exporter-serviceMonitor.yaml
-rw-r--r-- 1 root root     492 Mar 11  2022 node-exporter-service.yaml
-rw-r--r-- 1 root root     482 Mar 11  2022 prometheus-adapter-apiService.yaml
-rw-r--r-- 1 root root     576 Mar 11  2022 prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
-rw-r--r-- 1 root root     494 Mar 11  2022 prometheus-adapter-clusterRoleBindingDelegator.yaml
-rw-r--r-- 1 root root     471 Mar 11  2022 prometheus-adapter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     378 Mar 11  2022 prometheus-adapter-clusterRoleServerResources.yaml
-rw-r--r-- 1 root root     409 Mar 11  2022 prometheus-adapter-clusterRole.yaml
-rw-r--r-- 1 root root    2204 Mar 11  2022 prometheus-adapter-configMap.yaml
-rw-r--r-- 1 root root    2530 Oct 25 13:39 prometheus-adapter-deployment.yaml
-rw-r--r-- 1 root root     506 Mar 11  2022 prometheus-adapter-podDisruptionBudget.yaml
-rw-r--r-- 1 root root     515 Mar 11  2022 prometheus-adapter-roleBindingAuthReader.yaml
-rw-r--r-- 1 root root     287 Mar 11  2022 prometheus-adapter-serviceAccount.yaml
-rw-r--r-- 1 root root     677 Mar 11  2022 prometheus-adapter-serviceMonitor.yaml
-rw-r--r-- 1 root root     501 Mar 11  2022 prometheus-adapter-service.yaml
-rw-r--r-- 1 root root     447 Mar 11  2022 prometheus-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     394 Mar 11  2022 prometheus-clusterRole.yaml
-rw-r--r-- 1 root root    5000 Mar 11  2022 prometheus-operator-prometheusRule.yaml
-rw-r--r-- 1 root root     715 Mar 11  2022 prometheus-operator-serviceMonitor.yaml
-rw-r--r-- 1 root root     499 Mar 11  2022 prometheus-podDisruptionBudget.yaml
-rw-r--r-- 1 root root   14021 Mar 11  2022 prometheus-prometheusRule.yaml
-rw-r--r-- 1 root root    1184 Mar 11  2022 prometheus-prometheus.yaml
-rw-r--r-- 1 root root     471 Mar 11  2022 prometheus-roleBindingConfig.yaml
-rw-r--r-- 1 root root    1547 Mar 11  2022 prometheus-roleBindingSpecificNamespaces.yaml
-rw-r--r-- 1 root root     366 Mar 11  2022 prometheus-roleConfig.yaml
-rw-r--r-- 1 root root    2047 Mar 11  2022 prometheus-roleSpecificNamespaces.yaml
-rw-r--r-- 1 root root     271 Mar 11  2022 prometheus-serviceAccount.yaml
-rw-r--r-- 1 root root     531 Mar 11  2022 prometheus-serviceMonitor.yaml
-rw-r--r-- 1 root root     558 Mar 11  2022 prometheus-service.yaml
drw-r--r-- 2 root root    4096 Oct 24 12:31 setup


# First apply the YAML files in the setup directory, then apply those in the top-level directory. backsvc holds the grafana Service manifest; set it to NodePort or ClusterIP as needed. The cluster automatically deploys node-exporter on every K8S node and starts collecting data. The initial grafana credentials are admin/admin; add a dashboard to monitor the K8S cluster.
[root@lecode-k8s-master monitor]# cd setup/
[root@lecode-k8s-master setup]#   kubectl apply -f .
[root@lecode-k8s-master setup]# cd ..
[root@lecode-k8s-master monitor]# kubectl apply -f .
[root@lecode-k8s-master monitor]# kubectl get po -n monitoring 
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          74m
alertmanager-main-1                    2/2     Running   0          74m
alertmanager-main-2                    2/2     Running   0          74m
blackbox-exporter-6798fb5bb4-d9m7m     3/3     Running   0          74m
grafana-64668d8465-x7x9z               1/1     Running   0          74m
kube-state-metrics-569d89897b-hlqxj    3/3     Running   0          57m
node-exporter-6vqxg                    2/2     Running   0          74m
node-exporter-7dxh6                    2/2     Running   0          74m
node-exporter-9j5xk                    2/2     Running   0          74m
node-exporter-ftrmn                    2/2     Running   0          74m
node-exporter-qszkn                    2/2     Running   0          74m
node-exporter-wjkgj                    2/2     Running   0          74m
prometheus-adapter-5dd78c75c6-h2jf7    1/1     Running   0          58m
prometheus-adapter-5dd78c75c6-qpwzv    1/1     Running   0          58m
prometheus-k8s-0                       2/2     Running   0          74m
prometheus-k8s-1                       2/2     Running   0          74m
prometheus-operator-75d9b475d9-mmzgs   2/2     Running   0          80m
webhook-dingtalk-6ffc94b49-z9z6l       1/1     Running   0          61m
[root@lecode-k8s-master backsvc]# kubectl get svc -n monitoring 
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       NodePort    10.98.35.93     <none>        9093:30093/TCP               72m
alertmanager-operated   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   72m
blackbox-exporter       ClusterIP   10.109.10.110   <none>        9115/TCP,19115/TCP           72m
grafana                 NodePort    10.110.48.214   <none>        3000:30300/TCP               72m
kube-state-metrics      ClusterIP   None            <none>        8443/TCP,9443/TCP            72m
node-exporter           ClusterIP   None            <none>        9100/TCP                     72m
prometheus-adapter      ClusterIP   10.97.23.176    <none>        443/TCP                      72m
prometheus-k8s          ClusterIP   10.100.92.254   <none>        9090/TCP                     72m
prometheus-operated     ClusterIP   None            <none>        9090/TCP                     72m
prometheus-operator     ClusterIP   None            <none>        8443/TCP                     78m
webhook-dingtalk        ClusterIP   10.100.131.63   <none>        80/TCP                       72m

5.2 Accessing the services

There are three ways to expose the services: a NodePort Service, a Kubernetes Ingress, or a local nginx proxy.

Here grafana uses NodePort mode and Prometheus uses the nginx proxy. The nginx configuration is attached:

[root@lecode-k8s-master setup]# cat /usr/local/nginx/conf/4-layer-conf.d/lecode-prometheus-operator.conf 
# proxy for the built-in Prometheus dashboard UI
upstream prometheus-dashboard {
    server 10.100.92.254:9090; # IP of the prometheus-k8s Service
}
}

server {
    listen  9090;
    proxy_pass prometheus-dashboard;
}

# proxy for grafana
upstream grafana {
    server 10.1.82.89:3000; # IP of the grafana Service
}
}

server {
    listen  3000;
    proxy_pass grafana;
}
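If you prefer the Ingress route instead, a manifest along the lines of those shipped in the ingress/ directory might look like this sketch; the hostname and ingress class are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-k8s
  namespace: monitoring
spec:
  ingressClassName: nginx              # assumed ingress controller
  rules:
  - host: prometheus.example.com       # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s       # Service name from "kubectl get svc -n monitoring"
            port:
              number: 9090
```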

Visit the Prometheus targets page.

5.3 Connecting grafana

Access grafana (the default credentials are admin / admin).

Download the matching dashboards from the grafana site: https://grafana.com/grafana/dashboards/

6. Monitoring services outside the cluster

6.1 Installing the exporter

Install the matching exporter locally on the host running the service, so it can collect the data (MySQL is used as the example here).

# Download the exporter for the service
# Exporter list: https://prometheus.io/docs/instrumenting/exporters/
# Downloads: https://prometheus.io/download/
# After downloading, unpack mysqld_exporter-0.13.0.linux-amd64.tar.gz

# Configure mysql-exporter
# Create a .my.cnf file under /root with the following content:
[root@lecode-test-001 ~]# cat /root/.my.cnf 
[client]
user=mysql_monitor
password=Mysql@123



# Create the MySQL user and grant privileges

CREATE USER 'mysql_monitor'@'localhost' IDENTIFIED BY 'Mysql@123' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'mysql_monitor'@'localhost';
FLUSH PRIVILEGES;
EXIT

# Start mysqld_exporter
[root@lecode-test-001 mysql-exporter]# nohup mysqld_exporter &
# Find the listening port
[root@lecode-test-001 mysql-exporter]# tail -f nohup.out 
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:277 msg="Starting msqyld_exporter" version="(version=0.13.0, branch=HEAD, revision=ad2847c7fa67b9debafccd5a08bacb12fc9031f1)"
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:278 msg="Build context" (gogo1.16.4,userroot@e2043849cb1f,date20210531-07:30:16)=(MISSING)
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:293 msg="Scraper enabled" scraper=global_status
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:293 msg="Scraper enabled" scraper=global_variables
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:293 msg="Scraper enabled" scraper=slave_status
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:293 msg="Scraper enabled" scraper=info_schema.innodb_cmp
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:293 msg="Scraper enabled" scraper=info_schema.innodb_cmpmem
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:293 msg="Scraper enabled" scraper=info_schema.query_response_time
level=info ts=2022-10-25T09:26:54.464Z caller=mysqld_exporter.go:303 msg="Listening on address" address=:9104 # this is the exporter's port
level=info ts=2022-10-25T09:26:54.464Z caller=tls_config.go:191 msg="TLS is disabled." http2=false
# Check the port
[root@lecode-test-001 mysql-exporter]# ss -lntup |grep 9104
tcp    LISTEN     0      128      :::9104                 :::*                   users:(("mysqld_exporter",pid=26115,fd=3))


6.2 K8S configuration

Create an Endpoints resource pointing at the exporter port on the service's host, bind a Service to it, and then add the Prometheus target via a ServiceMonitor resource.

1) The official format

 kubectl -n monitoring get prometheus kube-prometheus -o yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    app: prometheus
    chart: prometheus-0.0.14
    heritage: Tiller
    prometheus: kube-prometheus
    release: kube-prometheus
  name: kube-prometheus
  namespace: monitoring
spec:
  ...
  baseImage: quay.io/prometheus/prometheus
  serviceMonitorSelector:
    matchLabels:
      prometheus: kube-prometheus 

# Next, create the corresponding ServiceMonitor resources following this format

2) Endpoint

kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-test
  namespace: monitoring
  labels:
    app.kubernetes.io/component: mysql-test
    app.kubernetes.io/name: mysql-test
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.1.1
subsets:
- addresses: 
  - ip: 192.168.1.17 # IP of the server running the exporter
  ports:
    - name: mysql
      port: 9104 # the exporter's port

Here we create our own static Endpoints object by hand, supplying the IP, the port, and the labels that describe our mysql exporter.

3) Service

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: mysql-test
    app.kubernetes.io/name: mysql-test
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.1.1
  name: mysql-test
  namespace: monitoring
spec:
  ports:
  - name: mysql
    port: 9104
  selector:
    app.kubernetes.io/component: mysql-test
    app.kubernetes.io/name: mysql-test
    app.kubernetes.io/part-of: kube-prometheus

4) ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/component: mysql-test
    app.kubernetes.io/name: mysql-test
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.1.1
  name: mysql-test
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: mysql
    tlsConfig:
      insecureSkipVerify: true
  selector:
    matchLabels:
      app.kubernetes.io/component: mysql-test
      app.kubernetes.io/name: mysql-test
      app.kubernetes.io/part-of: kube-prometheus

The most important part is the labels: we must assign the label prometheus: kube-prometheus so that the Prometheus server's matchLabels selector picks up this target, and the second set of labels so that the ServiceMonitor points only at our mysql exporter.

Let's apply everything:

 kubectl apply -f mysql-exporter-ep.yaml  \
          -f mysql-exporter-svc.yaml \
          -f mysql-exporter-sm.yaml

Now switch to the Prometheus UI: on the Targets page we should see our mysql-exporter in the list (note: appending /targets to the domain brings up the exporter list).

6.3 Connecting grafana

From: https://www.cnblogs.com/anslinux/p/16830952.html
