
ELK log collection system


Download page: https://elasticsearch.cn/download/

Download the installation packages:

mkdir elk
cd elk

 

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.5.1-x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.5.1-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.5.1-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.5.1-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.5.1-x86_64.rpm
#wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v8.5.1/elasticsearch-analysis-ik-8.5.1.zip


 

Install the RPM packages and the IK Chinese word-segmentation plugin:

yum -y install ./*.rpm
git clone https://gitee.com/dev-chen/elasticsearch-analysis-ik.git
cp -r elasticsearch-analysis-ik/ /etc/elasticsearch/plugins/ik
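Once Elasticsearch is running, the IK analyzer can be checked with the _analyze API. A minimal sketch, assuming the plugin directory was picked up correctly and that security is relaxed as in the elasticsearch.yml shown further below (the sample text is only an illustration):

curl -H 'Content-Type: application/json' \
  -X POST 'http://127.0.0.1:9200/_analyze?pretty' \
  -d '{"analyzer": "ik_max_word", "text": "中文分词测试"}'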

 

Messages recorded during the Elasticsearch installation

--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : Ows7ypw76KVJLXnrsR2r

If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
after creating an enrollment token on your existing cluster.

You can complete the following actions at any time:

Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.

Generate an enrollment token for Kibana instances with
 '/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.

Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.

-------------------------------------------------------------------------------------------------
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
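
After starting the service as above, the node can be checked over HTTPS with the generated elastic password from the output. A quick sketch, assuming the default security autoconfiguration is still in place (-k skips verification of the self-signed certificate):

curl -k -u elastic:'Ows7ypw76KVJLXnrsR2r' https://127.0.0.1:9200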

 

[root@test-linux ~]# ls /etc/elasticsearch/
certs                   elasticsearch-plugins.example.yml  jvm.options    log4j2.properties  role_mapping.yml  users
elasticsearch.keystore  elasticsearch.yml                  jvm.options.d  plugins            roles.yml         users_roles
[root@test-linux ~]# ls /etc/logstash/
conf.d  jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options
[root@test-linux ~]# ls /etc/filebeat/
fields.yml  filebeat.reference.yml  filebeat.yml  modules.d
[root@test-linux ~]# ls /etc/kibana/
kibana.keystore  kibana.yml  node.options
[root@test-linux ~]#

Elasticsearch configuration file. Note the SSL settings here: once SSL is enabled, the Logstash and Kibana connections to Elasticsearch must also use HTTPS.

# more  /etc/elasticsearch/elasticsearch.yml 
#::::::::::::::
#/etc/elasticsearch/elasticsearch.yml
#::::::::::::::
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
xpack.security.enabled: false
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["ELK"]
http.host: 0.0.0.0
#::::::::::::::
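
After editing elasticsearch.yml as above (security and SSL disabled), restart the service and check that the node answers over plain HTTP; a minimal verification:

sudo systemctl restart elasticsearch.service
curl http://127.0.0.1:9200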



In kibana.yml, server.host must be the host's real IP (the listening address is visible with ss -ntulp | grep 5601); this is the address used for external access. If it is set to the loopback address, Kibana cannot be reached from other machines.

#more /etc/kibana/kibana.yml
::::::::::::::
server.port: 5601
server.host: "192.168.202.11"
server.name: "ELK"

elasticsearch.hosts: ["http://127.0.0.1:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "I3ZK+81kPWaEuJwVy=2k"

logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file

i18n.locale: "zh-CN"

pid.file: /run/kibana/kibana.pid
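
The password used for elasticsearch.username in kibana.yml can be generated with the same reset tool shown in the installation output. A sketch, assuming the kibana_system service account is used rather than the older kibana built-in user shown above:

/usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
sudo systemctl enable --now kibana.service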

Filebeat configuration file, forwarding output to Logstash:

#more /etc/filebeat/filebeat.yml
#::::::::::::::
filebeat.inputs:
- type: log
  id: 1
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/*/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
#output.elasticsearch:
#  hosts: ["192.168.202.11:9200"]
output.logstash:
  hosts: ["192.168.202.11:5044"]
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
#::::::::::::::
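
Before starting Filebeat, the configuration and the connection to the Logstash output can be checked with Filebeat's own test subcommands; a minimal sketch:

filebeat test config
filebeat test output
sudo systemctl enable --now filebeat.service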

 

Logstash configuration file, taking Filebeat as the input:

#more /etc/logstash/conf.d/logstash-localhost.conf
#::::::::::::::
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "Vg0Xd-s9XHokOfIqlxMe"
  }
}
#::::::::::::::
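
The pipeline syntax can be validated before starting the service; a sketch using the bundled logstash binary from the RPM install:

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash-localhost.conf
sudo systemctl enable --now logstash.service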

 

#more /etc/logstash/conf.d/logstash-switch.conf
#::::::::::::::
input {
    tcp { port => 5002 type => "Cisco" }
    udp { port => 514  type => "HUAWEI" }
    udp { port => 5002 type => "Cisco" }
    udp { port => 5003 type => "H3C" }
}
filter {
    if [type] == "Cisco" {
        grok {
            match => { "message" => "<%{BASE10NUM:syslog_pri}>%{NUMBER:log_sequence}: .%{SYSLOGTIMESTAMP:timestamp}: %%{DATA:facility}-%{POSINT:severity}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:message}" }
            match => { "message" => "<%{BASE10NUM:syslog_pri}>%{NUMBER:log_sequence}: %{SYSLOGTIMESTAMP:timestamp}: %%{DATA:facility}-%{POSINT:severity}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:message}" }
            add_field => { "severity_code" => "%{severity}" }
            overwrite => ["message"]
        }
    }
    else if [type] == "H3C" {
        grok {
            match => { "message" => "<%{BASE10NUM:syslog_pri}>%{SYSLOGTIMESTAMP:timestamp} %{YEAR:year} %{DATA:hostname} %%%{DATA:vvmodule}/%{POSINT:severity}/%{DATA:digest}: %{GREEDYDATA:message}" }
            remove_field => [ "year" ]
            add_field => { "severity_code" => "%{severity}" }
            overwrite => ["message"]
        }
    }
    else if [type] == "HUAWEI" {
        grok {
            match => { "message" => "<%{BASE10NUM:syslog_pri}>%{SYSLOGTIMESTAMP:timestamp} %{DATA:hostname} %%%{DATA:ddModuleName}/%{POSINT:severity}/%{DATA:Brief}:%{GREEDYDATA:message}" }
            match => { "message" => "<%{BASE10NUM:syslog_pri}>%{SYSLOGTIMESTAMP:timestamp} %{DATA:hostname} %{DATA:ddModuleName}/%{POSINT:severity}/%{DATA:Brief}:%{GREEDYDATA:message}" }
            remove_field => [ "timestamp" ]
            add_field => { "severity_code" => "%{severity}" }
            overwrite => ["message"]
        }
    }
    # Optional: map the numeric severity to a readable name
    #mutate {
    #    gsub => [
    #        "severity", "0", "Emergency",
    #        "severity", "1", "Alert",
    #        "severity", "2", "Critical",
    #        "severity", "3", "Error",
    #        "severity", "4", "Warning",
    #        "severity", "5", "Notice",
    #        "severity", "6", "Informational",
    #        "severity", "7", "Debug"
    #    ]
    #}
}
output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        index => "syslog-%{+YYYY.MM.dd}"
        hosts => ["http://127.0.0.1:9200"]
        user => "elastic"
        password => "Vg0Xd-s9XHokOfIqlxMe"
    }
}
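
Note that binding UDP 514 typically requires root privileges or extra capabilities that the logstash service user does not have, so a port above 1024 or a redirect in front of Logstash may be needed. To inject a test message into one of the listeners from another Linux host, util-linux logger can be used; a sketch (adjust address, port and text):

logger -n 192.168.202.11 -P 5002 -d 'test syslog message for the Cisco pipeline'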
#more /etc/logstash/conf.d/logstash-snmp.conf.template
#::::::::::::::
input {
 snmp {
   interval => 60
   hosts => [{host => "udp:192.168.0.249/161" community => "public"},
             {host => "udp:192.168.20.253/161" community => "public"},
             {host => "udp:192.168.12.253/161" community => "public"},
             {host => "udp:192.168.0.241/161" community => "public"}
             ]
   tables => [{"name"=> "mac-address" "columns"=> ["1.3.6.1.2.1.17.4.3.1.1","1.3.6.1.2.1.17.4.3.1.2"] },
              {"name"=> "arp-address" "columns"=>["1.3.6.1.2.1.4.22.1.1","1.3.6.1.2.1.4.22.1.2","1.3.6.1.2.1.4.22.1.3"]}]
     }
}
filter {
    clone {
        clones => [event]
        add_field => { "clone" => "true" }
    }

    if [clone] { mutate { remove_field => ["mac-address"] } }
    else { mutate { remove_field => ["arp-address"] } }

    if [mac-address] {
        split { field => "mac-address" }
        mutate {
            rename => { "[mac-address][iso.org.dod.internet.mgmt.mib-2.dot1dBridge.dot1dTp.dot1dTpFdbTable.dot1dTpFdbEntry.dot1dTpFdbAddress]" => "MACaddress" }
            rename => { "[mac-address][iso.org.dod.internet.mgmt.mib-2.dot1dBridge.dot1dTp.dot1dTpFdbTable.dot1dTpFdbEntry.dot1dTpFdbPort]" => "FDBPort" }
            remove_field => ["mac-address"]
            add_field => { "cmdbtype" => "MACtable" }
        }
        elasticsearch {
            hosts => ["192.168.202.11:9200"]
            index => "nhserear-snmpfdbtable-2021.01.20"
            query => "fdbport:%{[FDBPort]} AND host:%{[host]}"
            fields => { "ifDescr" => "ifDescr" }
        }
    }

    if [arp-address] {
        split { field => "arp-address" }
        mutate {
            rename => { "[arp-address][iso.org.dod.internet.mgmt.mib-2.ip.ipNetToMediaTable.ipNetToMediaEntry.ipNetToMediaIfIndex]" => "ifIndex" }
            rename => { "[arp-address][iso.org.dod.internet.mgmt.mib-2.ip.ipNetToMediaTable.ipNetToMediaEntry.ipNetToMediaNetAddress]" => "IPaddress" }
            rename => { "[arp-address][iso.org.dod.internet.mgmt.mib-2.ip.ipNetToMediaTable.ipNetToMediaEntry.ipNetToMediaPhysAddress]" => "MACaddress" }
            remove_field => ["arp-address"]
            add_field => { "cmdbtype" => "ARPtable" }
        }
        elasticsearch {
            hosts => ["192.168.202.11:9200"]
            index => "nhserear-snmpiftable-2021.01.20"
            query => "ifIndex:%{[ifIndex]} AND host:%{[host]}"
            fields => { "ifDescr" => "ifDescr" }
        }
    }
}

output {
 elasticsearch{
 hosts=> ["192.168.202.11:9200"]
 index=> "nhserear-snmp-%{+YYYY.MM.dd}"
}
}
#::::::::::::::
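
The file above is only a template. To activate it, copy it to a .conf name inside conf.d (with the packaged default pipelines.yml, every *.conf under that directory is loaded) and run the same syntax check as before; a sketch:

cp /etc/logstash/conf.d/logstash-snmp.conf.template /etc/logstash/conf.d/logstash-snmp.conf
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash-snmp.conf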

 

From: https://www.cnblogs.com/santia-god/p/17584425.html
