1. Introduction to relabeling
To make monitoring metrics easier to identify and to simplify later tasks such as graphing and alerting, Prometheus supports modifying the labels of discovered targets: a target's label set can be rewritten dynamically before the target is scraped. Each scrape configuration may contain multiple relabeling steps, which are applied to every target's label set in the order they appear in the configuration file.
In addition to the per-target labels you configure, Prometheus automatically attaches several labels:
- job: set to the job_name of the scrape configuration the target belongs to.
- instance: set to the target's address <host>:<port>, taken from __address__. After relabeling, if no instance label was set during relabeling, the __address__ value is assigned to instance by default.
- __scheme__: the protocol scheme (http or https).
- __metrics_path__: the URL path the metrics are scraped from.
- __scrape_interval__: the scrape interval (seconds).
- __scrape_timeout__: the scrape timeout (seconds).
Additional labels prefixed with __meta_ may be available during relabeling. They are set by the service discovery mechanism that provided the target and vary between mechanisms.
Once target relabeling is complete, labels whose names begin with __ are removed from the label set.
If a relabeling step needs to store a label value only temporarily (as input for a subsequent relabeling step), use the __tmp label-name prefix; this prefix is guaranteed never to be used by Prometheus itself.
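As a sketch of the __tmp convention (the label names here are illustrative, not from any real setup), an intermediate value can be passed between two relabeling steps like this:

```yaml
relabel_configs:
  # Step 1: stash the host part of the address in a temporary label.
  - source_labels: ["__address__"]
    regex: "(.*):.*"
    target_label: "__tmp_host"
    action: replace
  # Step 2: consume the temporary label; __tmp_host is removed
  # automatically after relabeling because its name starts with __.
  - source_labels: ["__tmp_host"]
    regex: "(.+)"
    target_label: "host"
    action: replace
```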
Relabeling is commonly applied at two stages:
relabel_configs: applied before scraping (for example, to rewrite meta labels before data is collected); it can add labels, or restrict scraping to specific targets by filtering.
metric_relabel_configs: applied to metric samples that have already been scraped, for a final round of relabeling and filtering.
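A minimal sketch of where the two stages sit in a scrape config (the host label and the go_gc_.* metric filter are illustrative examples, not from the original setup):

```yaml
scrape_configs:
  - job_name: "nodes"
    static_configs:
      - targets: ["192.168.88.201:9100"]
    # Applied to the target's label set before scraping.
    relabel_configs:
      - source_labels: ["__address__"]
        regex: "(.*):.*"
        target_label: "host"
    # Applied to each scraped sample before ingestion.
    metric_relabel_configs:
      - source_labels: ["__name__"]
        regex: "go_gc_.*"        # e.g. drop Go GC metrics
        action: drop
```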
2. relabel_configs configuration
source_labels: the source labels, i.e. the label names before relabel processing
target_label: the label name that receives the result of the relabel step
separator: the separator used to join the values of the source labels; defaults to ";"
modulus: the modulus to take of the hash of the source label values (used with hashmod)
regex: a regular expression matched against the (joined) source label values; defaults to (.*)
replacement: the value written to the target label, with regex capture-group references; defaults to $1
action: the action performed based on the regex match; defaults to replace
- replace: match regex against the source label values and set target_label to replacement, which may reference capture groups from the match
- keep: scrape only instances whose source_labels values match regex; instances that do not match are dropped
- drop: the opposite of keep: instances whose source_labels values match regex are dropped, and only non-matching instances are scraped
- hashmod: set target_label to the hash of the source_labels values modulo modulus, which is useful for sharding or classifying targets
- labelmap: match regex against all label names; copy the values of matching labels to new labels whose names are built from capture-group references in replacement ($1, $2, ...)
- labeldrop: match regex against all label names and delete the matching labels
- labelkeep: match regex against all label names and keep only the matching labels
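The actions above can be summarized in a small Python sketch. This is a simplified model of the semantics just described, not Prometheus's implementation: regexes are fully anchored as in Prometheus, but the hashmod hash here is plain MD5 and will not produce the same shard numbers as Prometheus's own hash.

```python
import hashlib
import re

def relabel(labels, *, action="replace", source_labels=(), separator=";",
            regex="(.*)", target_label=None, replacement="$1", modulus=None):
    """Apply one relabeling step to a plain dict of labels.
    Returns the new label dict, or None if the target is dropped."""
    pat = re.compile(regex)
    value = separator.join(labels.get(name, "") for name in source_labels)
    repl = replacement.replace("$", "\\")  # "$1" -> Python's "\1"

    if action in ("keep", "drop"):
        matched = pat.fullmatch(value) is not None
        return dict(labels) if matched == (action == "keep") else None

    out = dict(labels)
    if action == "replace":
        m = pat.fullmatch(value)
        if m:                      # no match -> label set left unchanged
            out[target_label] = m.expand(repl)
    elif action == "hashmod":
        # Illustrative only: Prometheus uses its own MD5-based hash, so
        # shard numbers will differ from this sketch.
        out[target_label] = str(int(hashlib.md5(value.encode()).hexdigest(), 16) % modulus)
    elif action == "labelmap":
        for name, v in labels.items():
            m = pat.fullmatch(name)
            if m:
                out[m.expand(repl)] = v
    elif action in ("labeldrop", "labelkeep"):
        out = {n: v for n, v in labels.items()
               if (pat.fullmatch(n) is not None) == (action == "labelkeep")}
    return out
```

For example, relabel({"__hostname__": "node01"}, source_labels=["__hostname__"], target_label="node_name") adds node_name="node01", while action="keep" with regex="node02" would return None (target dropped).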
3. Common action examples
Before starting, prepare the Prometheus configuration file:
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: [ "localhost:9090" ]
- job_name: "nodes"
static_configs:
- targets:
- 192.168.88.201:9100
labels:
__hostname__: node01
__region_id__: "shanghai"
__zone__: a
- targets:
- 192.168.88.202:9100
labels:
__hostname__: node02
__region_id__: "beijing"
__zone__: b
Check the targets.
Because all of our labels begin with __, they are removed from the label set after target relabeling.
3.1 replace
Replace the __hostname__ label with node_name:
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: [ "localhost:9090" ]
- job_name: "nodes"
static_configs:
- targets:
- 192.168.88.201:9100
labels:
__hostname__: node01
__region_id__: "shanghai"
__zone__: a
- targets:
- 192.168.88.202:9100
labels:
__hostname__: node02
__region_id__: "beijing"
__zone__: b
relabel_configs:
- source_labels:
- "__hostname__"
regex: "(.*)"
target_label: "node_name"
action: replace
replacement: $1
Restart the service and check the target information.
A few notes on the configuration above: source_labels specifies the source labels to process, and target_label is the label name after the replace. action selects the relabel action, here replace. regex is matched against the value of the source label (__hostname__); "(.*)" matches any value of __hostname__. replacement specifies the value assigned to the target label (target_label), obtained here via a regex group reference ($1).
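To make the role of separator concrete, here is a small sketch (the label values mirror the config above; the region_zone target label is hypothetical) of how multiple source labels are joined before the regex runs:

```python
import re

# Two source labels are joined with the separator (default ";") ...
labels = {"__region_id__": "shanghai", "__zone__": "a"}
joined = ";".join(labels[l] for l in ["__region_id__", "__zone__"])

# ... then the anchored regex is matched against the joined string and
# the replacement's group references are expanded into target_label.
m = re.fullmatch(r"(.*);(.*)", joined)
labels["region_zone"] = m.expand(r"\1-\2")   # replacement: "$1-$2"
print(labels["region_zone"])                  # -> shanghai-a
```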
3.2 keep
Only instances whose source_labels value matches node01 are scraped; the other instances are not:
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: [ "localhost:9090" ]
- job_name: "nodes"
static_configs:
- targets:
- 192.168.88.201:9100
labels:
__hostname__: node01
__region_id__: "shanghai"
__zone__: a
- targets:
- 192.168.88.202:9100
labels:
__hostname__: node02
__region_id__: "beijing"
__zone__: b
relabel_configs:
- source_labels:
- "__hostname__"
regex: "node01"
action: keep
The targets are shown below.
3.3 drop
Building on the keep example, change action to drop.
The targets are shown below.
drop works like keep, but inverted: instances whose source_labels value matches regex (node01) are not scraped, while all other instances are.
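The keep/drop filtering can be sketched in Python (the target list mirrors the config above; as in Prometheus, the regex match is fully anchored):

```python
import re

targets = [
    {"__hostname__": "node01", "__address__": "192.168.88.201:9100"},
    {"__hostname__": "node02", "__address__": "192.168.88.202:9100"},
]

def filter_targets(targets, regex, action):
    # keep: scrape only matching targets; drop: scrape only non-matching ones.
    pat = re.compile(regex)
    return [t for t in targets
            if (pat.fullmatch(t["__hostname__"]) is not None) == (action == "keep")]

print([t["__address__"] for t in filter_targets(targets, "node01", "keep")])
# -> ['192.168.88.201:9100']
print([t["__address__"] for t in filter_targets(targets, "node01", "drop")])
# -> ['192.168.88.202:9100']
```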
3.4 labelmap
labelmap matches the regex __(.*)__ against all label names; for each matching label it copies the label's value to a new label whose name is built from the replacement group reference ($1).
Modify the configuration as follows:
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: [ "localhost:9090" ]
- job_name: "nodes"
static_configs:
- targets:
- 192.168.88.201:9100
labels:
__hostname__: node01
__region_id__: "shanghai"
__zone__: a
- targets:
- 192.168.88.202:9100
labels:
__hostname__: node02
__region_id__: "beijing"
__zone__: b
relabel_configs:
- regex: "__(.*)__"
action: labelmap
Check the targets.
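The labelmap step can be sketched as follows (same label set as the config above):

```python
import re

labels = {"__hostname__": "node01", "__region_id__": "shanghai", "__zone__": "a"}

# For every label NAME fully matching __(.*)__ , copy its value to a new
# label named by the $1 group reference; the original __...__ labels are
# later removed automatically because they start with __.
pat = re.compile(r"__(.*)__")
for name, value in list(labels.items()):
    m = pat.fullmatch(name)
    if m:
        labels[m.group(1)] = value

print({k: v for k, v in labels.items() if not k.startswith("__")})
# -> {'hostname': 'node01', 'region_id': 'shanghai', 'zone': 'a'}
```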
3.5 labelkeep
First attach a few labels to each instance; the configuration file is as follows:
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: [ "localhost:9090" ]
- job_name: "nodes"
static_configs:
- targets:
- 192.168.88.201:9100
labels:
__hostname__: node01
__region_id__: "shanghai"
__zone__: a
- targets:
- 192.168.88.202:9100
labels:
__hostname__: node02
__region_id__: "beijing"
__zone__: b
relabel_configs:
- source_labels:
- "__hostname__"
regex: (.*)
target_label: hostname
action: replace
replacement: $1
- source_labels:
- "__region_id__"
regex: (.*)
target_label: region_id
action: replace
replacement: $1
- source_labels:
- "__zone__"
regex: (.*)
target_label: zone
action: replace
replacement: $1
Check the targets as shown below.
Add a labelkeep rule that keeps the labels matching the regex __.*__|job and deletes those that do not match:
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: [ "localhost:9090" ]
- job_name: "nodes"
static_configs:
- targets:
- 192.168.88.201:9100
labels:
__hostname__: node01
__region_id__: "shanghai"
__zone__: a
- targets:
- 192.168.88.202:9100
labels:
__hostname__: node02
__region_id__: "beijing"
__zone__: b
relabel_configs:
- source_labels:
- "__hostname__"
regex: (.*)
target_label: hostname
action: replace
replacement: $1
- source_labels:
- "__region_id__"
regex: (.*)
target_label: region_id
action: replace
replacement: $1
- source_labels:
- "__zone__"
regex: (.*)
target_label: zone
action: replace
replacement: $1
- action: labelkeep
regex: "__.*__|job"
Check the targets: only the instance and job labels remain; the instance label is generated automatically by Prometheus.
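The labelkeep result can be reproduced with a short sketch (the label set mirrors the state after the three replace steps above):

```python
import re

# Label set after the replace steps, before labelkeep runs.
labels = {"__hostname__": "node01", "hostname": "node01",
          "region_id": "shanghai", "zone": "a", "job": "nodes"}

# labelkeep: keep only label names fully matching the regex.
pat = re.compile(r"__.*__|job")
kept = {n: v for n, v in labels.items() if pat.fullmatch(n)}
print(kept)   # -> {'__hostname__': 'node01', 'job': 'nodes'}
# __hostname__ is then dropped automatically (leading __), and Prometheus
# sets instance from __address__, leaving just instance and job.
```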
3.6 labeldrop
Change to a labeldrop rule that deletes the labels matching the regex region_id|zone and keeps those that do not match:
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: [ "localhost:9090" ]
- job_name: "nodes"
static_configs:
- targets:
- 192.168.88.201:9100
labels:
__hostname__: node01
__region_id__: "shanghai"
__zone__: a
- targets:
- 192.168.88.202:9100
labels:
__hostname__: node02
__region_id__: "beijing"
__zone__: b
relabel_configs:
- source_labels:
- "__hostname__"
regex: (.*)
target_label: hostname
action: replace
replacement: $1
- source_labels:
- "__region_id__"
regex: (.*)
target_label: region_id
action: replace
replacement: $1
- source_labels:
- "__zone__"
regex: (.*)
target_label: zone
action: replace
replacement: $1
- action: labeldrop
regex: "region_id|zone"
Check the targets: the region_id and zone labels have been deleted, but the hostname label remains.
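labeldrop is the complement of labelkeep; a sketch with the same labels:

```python
import re

labels = {"hostname": "node01", "region_id": "shanghai",
          "zone": "a", "job": "nodes", "instance": "192.168.88.201:9100"}

# labeldrop: remove label names fully matching the regex; keep the rest.
pat = re.compile(r"region_id|zone")
labels = {n: v for n, v in labels.items() if not pat.fullmatch(n)}
print(sorted(labels))   # -> ['hostname', 'instance', 'job']
```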
From: https://www.cnblogs.com/gaoyuechen/p/18058860