
VM environment setup for OTLP data collection

Posted: 2024-04-12 15:59:02 · Views: 23
Tags: otlp nginx 0.0 VM collection prometheus loki usr local

Collection + monitoring

1.LB

LB configuration file. nginx's built-in ngx_http_stub_status_module serves a /nginx_status endpoint (the location name can be anything you choose) that reports nginx's own basic status information:

vim InforSuiteLB/conf/InforSuiteLB.conf
location /nginx_status {
            stub_status on;
           # access_log   off;
           # allow 127.0.0.1;
           # deny all;
        }

Once configured, start the LB.
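For reference, stub_status returns a small plain-text report. The sketch below uses made-up sample numbers to show the format and how a script might pull a single counter out of it:

```shell
# Sample stub_status output (made-up numbers), as returned by e.g.
#   curl -s http://192.168.209.132/nginx_status
status='Active connections: 2
server accepts handled requests
 16 16 18
Reading: 0 Writing: 1 Waiting: 1'

# Extract the active-connections count, e.g. for a quick health check
active=$(printf '%s\n' "$status" | awk '/^Active connections/ {print $3}')
echo "$active"
```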

2.metrics

1.Nginx Prometheus Exporter

Extract the tarball and create a systemd unit:

# Extract
tar -zxvf nginx-prometheus-exporter_1.1.0_linux_amd64.tar.gz

vim /etc/systemd/system/nginx-prometheus-exporter.service
[Unit]
Description=nginx-prometheus-exporter
Documentation=https://github.com/nginxinc/nginx-prometheus-exporter
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/nginx-prometheus-exporter \
--web.listen-address=:9113 \
--nginx.scrape-uri=http://192.168.209.132:80/nginx_status
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the Nginx Prometheus Exporter:

# Reload systemd unit files
systemctl daemon-reload
# Enable at boot
systemctl enable nginx-prometheus-exporter.service
# Start the exporter
systemctl start nginx-prometheus-exporter.service
# Check the exporter's status
systemctl status nginx-prometheus-exporter.service

The Nginx Prometheus Exporter converts the metrics nginx exposes into the Prometheus exposition format, so that other components can collect or scrape them.
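Conceptually, the exporter maps each stub_status field to a Prometheus metric (nginx_connections_active, nginx_connections_reading, and so on). A rough awk sketch of that mapping, using made-up sample numbers; the real exporter also emits accepts/handled/requests counters plus HELP/TYPE metadata:

```shell
status='Active connections: 2
server accepts handled requests
 16 16 18
Reading: 0 Writing: 1 Waiting: 1'

# Emit Prometheus exposition-format lines from the stub_status text
metrics=$(printf '%s\n' "$status" | awk '
  /^Active connections/ { print "nginx_connections_active " $3 }
  /^Reading/ {
    print "nginx_connections_reading " $2
    print "nginx_connections_writing " $4
    print "nginx_connections_waiting " $6
  }')
printf '%s\n' "$metrics"
```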

2.Prometheus data source

# 1 Enter the install directory
cd /usr/local
# 2 Download the release
wget https://github.com/prometheus/prometheus/releases/download/v2.42.0/prometheus-2.42.0.linux-amd64.tar.gz
# 3 Extract
tar -zxvf prometheus-2.42.0.linux-amd64.tar.gz
# 4 Rename
mv prometheus-2.42.0.linux-amd64 prometheus

Configure start at boot:

vim /usr/lib/systemd/system/prometheus.service
[Unit]
Description=Prometheus
After=network.target
Documentation=https://prometheus.io/

[Service]
Type=simple
ExecStart=/usr/local/prometheus/prometheus --config.file=/usr/local/prometheus/prometheus.yml --storage.tsdb.path=/usr/local/prometheus/data --web.listen-address=:9090 --web.enable-lifecycle
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configuration file:

vi prometheus/prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).



# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"


remote_write:
  - url: "http://192.168.209.132:4317/api/v1/write"
remote_read:
  - url: "http://192.168.209.132:4317/api/v1/read"



# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.209.132:9090"]

  - job_name: 'nginx-stub-status'
    static_configs:
      - targets: ['192.168.209.132:1234']  # address/port of otelcol's Prometheus exporter

Start:

# Reload systemd unit files
systemctl daemon-reload
# Enable at boot
systemctl enable prometheus
# Start Prometheus
systemctl start prometheus
# Check Prometheus's status
systemctl status prometheus

# Verify the service is listening
lsof -i:9090

3.logs

1.loki

Download:

mkdir /usr/local/loki

### Download the binary package
wget "https://github.com/grafana/loki/releases/download/v2.7.4/loki-linux-amd64.zip"
### Extract it
unzip "loki-linux-amd64.zip"
### make sure it is executable
chmod a+x "loki-linux-amd64"

Configuration:

vim loki-local-config.yml
auth_enabled: false
 
server:
  http_listen_port: 3100
  grpc_listen_port: 9096
 
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 10m
  chunk_retain_period: 30s
schema_config:
  configs:
  - from: 2020-05-15
    store: boltdb
    object_store: filesystem
    schema: v11
    index:
      prefix: index_
      period: 168h
storage_config:
  boltdb:
    directory: /usr/local/loki/index
  filesystem:
    directory: /usr/local/loki/chunks  # chunk storage path
 
limits_config:
  enforce_metric_name: false
  reject_old_samples: true          # reject samples that are too old
  reject_old_samples_max_age: 168h  # samples older than 168h are rejected
  ingestion_rate_mb: 200
  ingestion_burst_size_mb: 300
  per_stream_rate_limit: 1000MB
  max_entries_limit_per_query: 10000
chunk_store_config:
  max_look_back_period: 168h        # to avoid querying past the retention period, must be <= retention_period below
table_manager:
  retention_deletes_enabled: true   # enable retention-based deletion
  retention_period: 168h            # chunks older than 168h are deleted
 
ruler:
  storage:
    type: local
    local:
      directory: /usr/local/loki/rules
  rule_path: /usr/local/loki/rules-temp
  alertmanager_url: http://192.168.209.132:9093    # Alertmanager address
  ring:
    kvstore:
      store: inmemory
  enable_api: true
  enable_alertmanager_v2: true
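Once Loki is up, it can be queried over its HTTP API with LogQL. A sketch (the job label matches the Promtail config in the next section; the curl line is commented out because it needs a running Loki):

```shell
# A LogQL query: select the nginx access-log stream and filter for GET lines
q='{job="nginx_access_log"} |= "GET"'
echo "$q"

# Against a running Loki (address assumed from this setup):
# curl -G -s 'http://192.168.209.132:3100/loki/api/v1/query_range' \
#      --data-urlencode "query=$q"
```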

Startup script:

vim restart-loki.sh

Script contents:

#!/bin/bash
echo "Begin stop loki"
ps -ef | grep loki-linux-amd64 | grep -v grep | awk '{print $2}' | xargs -r kill -9

echo "Begin start loki"
sleep 1
nohup ./loki-linux-amd64 --config.file=loki-local-config.yml &
### make it executable
chmod +x restart-loki.sh
### run it
cd /usr/local/loki
./restart-loki.sh

2.Promtail log agent

Download:

mkdir /usr/local/promtail

### Download the binary package
wget "https://github.com/grafana/loki/releases/download/v2.7.4/promtail-linux-amd64.zip"
### Extract it
unzip "promtail-linux-amd64.zip"
### make sure it is executable
chmod a+x "promtail-linux-amd64"

Configuration:

vim promtail-local-config.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /usr/local/promtail/positions.yaml

clients:
  - url: http://192.168.209.132:3100/loki/api/v1/push # the Loki push endpoint

scrape_configs:
- job_name: nginx
  pipeline_stages:
  - replace:
      expression: '(?:[0-9]{1,3}\.){3}([0-9]{1,3})'
      replace: '***'
  static_configs:
  - targets:
      - localhost
    labels:
      job: nginx_access_log
      host: appfelstrudel
      agent: promtail
      __path__: /usr/local/InforSuiteLB/logs/json_access.log
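The replace stage above rewrites the regex's capture group (the last octet of any IPv4 address) to ***, so client IPs reach Loki partially masked. A roughly equivalent sed one-liner, handy for sanity-checking the pattern against a sample log line (made-up here):

```shell
# A made-up nginx access-log line
line='192.168.1.55 - - [12/Apr/2024:10:00:00 +0800] "GET / HTTP/1.1" 200'

# Mask the last octet of every IPv4 address, as the Promtail stage would
masked=$(printf '%s\n' "$line" | sed -E 's/(([0-9]{1,3}\.){3})[0-9]{1,3}/\1***/g')
echo "$masked"
```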

Startup script:

vi restart-promtail.sh

Script contents:

#!/bin/bash
echo "Begin stop promtail"
ps -ef | grep promtail-linux-amd64 | grep -v grep | awk '{print $2}' | xargs -r kill -9
 
echo "Begin start promtail...."
nohup ./promtail-linux-amd64 --config.file=promtail-local-config.yml > ./promtail-9080.log 2>&1 &
### make it executable
chmod +x restart-promtail.sh
### run it
cd /usr/local/promtail
./restart-promtail.sh

4.OpenTelemetry Collector

Install the rpm:

rpm -ivh otelcol_0.94.0_linux_amd64.rpm

Configuration file:

vim /etc/otelcol/config.yaml
# To limit exposure to denial of service attacks, change the host in endpoints below from 0.0.0.0 to a specific network interface.
# See https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks

extensions:
  health_check:
  pprof:
    endpoint: 0.0.0.0:1777
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

  opencensus:
    endpoint: 0.0.0.0:55678

  # Collect own metrics
  prometheus:
    config:
      scrape_configs:
      - job_name: 'nginx-stub-status'
        scrape_interval: 10s
        static_configs:
        - targets: ['192.168.209.132:9113']
          labels:
            job: 'nginx-stub-status'


  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_binary:
        endpoint: 0.0.0.0:6832
      thrift_compact:
        endpoint: 0.0.0.0:6831
      thrift_http:
        endpoint: 0.0.0.0:14268

  zipkin:
    endpoint: 0.0.0.0:9411


processors:
  batch:

exporters:
  debug:
    verbosity: detailed
  prometheus/metrics:
    endpoint: "192.168.209.132:1234"


service:

  pipelines:

    traces:
      receivers: [otlp, opencensus, jaeger, zipkin]
      processors: [batch]
      exporters: [debug]

    metrics:
      receivers: [otlp, opencensus, prometheus]
      processors: [batch]
      exporters: [debug, prometheus/metrics]

    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]


  extensions: [health_check, pprof, zpages]

Start the service:

# Reload systemd unit files
systemctl daemon-reload
# Enable at boot
systemctl enable otelcol
# Start otelcol
systemctl start otelcol
# Check otelcol's status
systemctl status otelcol
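With the collector running, the OTLP/HTTP receiver on port 4318 can be smoke-tested by posting a minimal log record; /v1/logs is the standard OTLP/HTTP path, while the "demo" service name and the message text are just placeholders:

```shell
# A minimal OTLP/HTTP JSON payload carrying one log record
payload='{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"demo"}}]},"scopeLogs":[{"logRecords":[{"body":{"stringValue":"hello from curl"}}]}]}]}'
echo "$payload"

# Post it to the collector; it should then show up in the debug exporter's output:
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$payload" http://192.168.209.132:4318/v1/logs
```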

5.Grafana installation

# 1 Enter the install directory
cd /usr/local
# 2 Download the release
wget https://dl.grafana.com/oss/release/grafana-9.4.3.linux-amd64.tar.gz
# 3 Extract
tar -zxvf grafana-9.4.3.linux-amd64.tar.gz
# 4 Rename
mv grafana-9.4.3 grafana

Configure start at boot:

# Create the grafana.service file
vim /usr/lib/systemd/system/grafana.service
[Unit]
Description=Grafana
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/grafana/bin/grafana-server -homepath /usr/local/grafana
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start:

# Reload systemd unit files
systemctl daemon-reload
# Enable at boot
systemctl enable grafana
# Start Grafana
systemctl start grafana
# Check Grafana's status
systemctl status grafana

# Verify the service is listening
lsof -i:3000

Configure the Chinese UI (set in Grafana's configuration file):

default_language = zh-Hans

After startup, open the web UI (port 3000) to configure the data source and dashboards (nginx metrics dashboard template ID: 11199; nginx log dashboard template ID: 12559).

6.Results

(Screenshots of the resulting Grafana dashboards omitted.)

From: https://www.cnblogs.com/hmk7710/p/18131468
