Overview
The ELK log-analysis platform refers to the combination of three projects: Elasticsearch, Logstash, and Kibana; the Filebeat data shipper was added to the stack later.
- Elasticsearch is a search and analytics engine for data.
- Logstash is a log collection and processing pipeline: it gathers logs from clients/business systems, transforms them, and ships them to Elasticsearch.
- Kibana provides graphical/report visualization of the data.
- Filebeat is a lighter-weight data shipper than Logstash.
In general, the log data flow follows one of two schemes:
- Filebeat collects the logs and forwards them to Logstash, which formats them and sends them to Elasticsearch for storage; administrators then visualize the data in Kibana.
- Filebeat collects the logs, formats them itself, and sends them to Elasticsearch for storage; administrators then visualize the data in Kibana.
This article walks through setting up and using the second scheme.
Goals
- Quickly locate problems when errors occur in production.
- With many project nodes, logs are scattered; ELK manages them centrally.
- Search large volumes of log files precisely by keyword.
- Let developers inspect logs without needing server access.
Installation

```shell
git clone https://github.com/deviantony/docker-elk.git  # this guide uses version 8.6.2
docker-compose up  # starts only elasticsearch, kibana, and logstash
```

Default user: elastic
Default password: changeme
Kibana UI: http://localhost:5601
Elasticsearch: http://localhost:9200
See the .env file for details.
![](http://upload-images.jianshu.io/upload_images/12344097-6cfd70f1a76dc0eb.png?imageMogr2/auto-orient/strip|imageView2/2/w/1031/format/webp)
Setting usernames and passwords
Log in to Kibana with the default account and password.
![](http://upload-images.jianshu.io/upload_images/12344097-88fb2850a16b1f39.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
Step 1
![](http://upload-images.jianshu.io/upload_images/12344097-a29cd82f79d3d54c.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
Step 2
![](http://upload-images.jianshu.io/upload_images/12344097-58d037c2a5436bd4.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-fee5855ee0f53587.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
Switching Kibana to Chinese
Add the following line at the end of /kibana/config/kibana.yml: `i18n.locale: "zh-CN"`
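As a config fragment (Kibana reads kibana.yml at startup, so restart the container afterwards, e.g. `docker-compose restart kibana` — the restart step is an assumption based on how docker-elk mounts this file):

```yaml
# tail of /kibana/config/kibana.yml — switch the Kibana UI to Simplified Chinese
i18n.locale: "zh-CN"
```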
![](http://upload-images.jianshu.io/upload_images/12344097-a881c89d77dd2c32.png?imageMogr2/auto-orient/strip|imageView2/2/w/886/format/webp)
Configure Filebeat to collect logs and ship them to Elasticsearch
Edit /extensions/filebeat/config/filebeat.yml.
Note that the account and password referenced below must be created in Kibana beforehand.

```yaml
setup.kibana:
  host: "http://kibana:5601"  # Kibana address
  username: "filebeat_internal"
  password: ${FILEBEAT_INTERNAL_PASSWORD}

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/share/filebeat/logs/*.log  # path to collect from (inside the Docker container)
    fields:
      type: "testing"  # log tag to tell log sources apart; used below when building the index
    fields_under_root: true
    encoding: utf-8  # encoding of the monitored files; both plain and utf-8 handle Chinese logs
    multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    multiline.negate: true  # true: lines NOT matching the pattern are continuations [recommended]
    multiline.match: after  # merge continuation lines after (vs. before) the matching line

processors:
  - drop_fields:
      # remove redundant fields
      fields: ["agent.type","agent.name","agent.version","log.file.path","log.offset","input.type","ecs.version","host.name","agent.ephemeral_id","agent.hostname","agent.id","_id","_index","_score","_suricata.eve.timestamp","cloud.availability_zone","host.containerized","host.os.kernel","host.os.name","host.os.version"]

output.elasticsearch:
  hosts: [ "http://elasticsearch:9200" ]
  indices:
    # index name, typically "<service>-<ip>-%{+yyyy.MM.dd}"
    - index: "testing-%{+yyyy.MM.dd}"
      when.contains:
        type: "testing"  # tag matching the input's fields.type above
  username: filebeat_internal
  password: ${FILEBEAT_INTERNAL_PASSWORD}
```
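The effect of the multiline settings can be simulated offline with `grep`: lines matching `multiline.pattern` each start a new log event, and every other line (e.g. a stack-trace continuation) gets merged into the event before it. A small sketch with made-up sample content:

```shell
# Sample log: two events; the third line is a stack-trace continuation.
printf '%s\n' \
  '2023-04-01 10:00:00 INFO started' \
  '2023-04-01 10:00:01 ERROR boom' \
  '    at com.example.Main(Main.java:10)' > sample.log

# Lines matching multiline.pattern start a new log entry; every other line is
# merged into the entry before it (multiline.negate: true, multiline.match: after).
grep -cE '^[0-9]{4}-[0-9]{2}-[0-9]{2}' sample.log   # prints 2
```

So Filebeat would ship two documents here, with the `at com.example...` line attached to the ERROR event rather than indexed on its own.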
![](http://upload-images.jianshu.io/upload_images/12344097-96a566fa2c16278c.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
Map the log directory
Create a /logs directory and write some test log files into it.
/extensions/filebeat/filebeat-compose.yml
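The exact compose file is shown in the screenshot below; as a sketch (the host path is an assumption — adjust it to your layout), the relevant part maps a host ./logs directory to the container path that Filebeat watches:

```yaml
services:
  filebeat:
    volumes:
      # host log directory -> the path used in filebeat.inputs paths above
      - ./logs:/usr/share/filebeat/logs:ro
```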
![](http://upload-images.jianshu.io/upload_images/12344097-a312c7f8575dd54e.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
启动容器
# 同时启动elasticsearch、kibana、logstash、filebeat
docker-compose -f docker-compose.yml -f extensions/filebeat/filebeat-compose.yml up
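Once the stack is up, you can append an entry in the expected format to the mapped directory and watch it appear in Kibana. The file name is arbitrary as long as it matches `*.log`:

```shell
# Append a test entry whose timestamp matches multiline.pattern above.
mkdir -p logs
echo "$(date '+%Y-%m-%d %H:%M:%S') ERROR test entry" >> logs/test.log
tail -n 1 logs/test.log
```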
Configure workspaces
Step 1: create a space
![](http://upload-images.jianshu.io/upload_images/12344097-f472f0c4b379b5d4.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
Switch spaces
![](http://upload-images.jianshu.io/upload_images/12344097-d970a8bad9ae4c9d.png?imageMogr2/auto-orient/strip|imageView2/2/w/961/format/webp)
Step 2
![](http://upload-images.jianshu.io/upload_images/12344097-a3a5d4bf8000edc3.png?imageMogr2/auto-orient/strip|imageView2/2/w/708/format/webp)
Step 3
![](http://upload-images.jianshu.io/upload_images/12344097-be5f57f5161ac8e4.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
Step 4
![](http://upload-images.jianshu.io/upload_images/12344097-cf4756bd1007e12f.png?imageMogr2/auto-orient/strip|imageView2/2/w/687/format/webp)
Step 5
![](http://upload-images.jianshu.io/upload_images/12344097-e94ae859a511f987.png?imageMogr2/auto-orient/strip|imageView2/2/w/358/format/webp)
Step 6
![](http://upload-images.jianshu.io/upload_images/12344097-2c36ea8bce4f0b56.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-1f43ccdf1ad104e8.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
Fixing the Elasticsearch/Kibana "Your trial license is expired" message
![](http://upload-images.jianshu.io/upload_images/12344097-59d5acad4b72feac.png?imageMogr2/auto-orient/strip|imageView2/2/w/658/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-13ac7fa9b043c36d.png?imageMogr2/auto-orient/strip|imageView2/2/w/336/format/webp)
Select the Basic license.
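If you prefer the API over the UI, the trial can also be downgraded to Basic from Kibana Dev Tools using the standard license API (this is an alternative to the click path shown above):

```
POST _license/start_basic?acknowledge=true
```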
Set an index lifecycle
1. Create an index lifecycle policy that deletes indices automatically after 30 days.
![](http://upload-images.jianshu.io/upload_images/12344097-8c49afe261a19c41.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-71acc5aad5edf21f.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-f0c43f0ef302347b.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-b1a3f4c0be147b89.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
2. Set up the index template.
![](http://upload-images.jianshu.io/upload_images/12344097-7fcef7e22ded1811.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-99ae47b459586ddc.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-8ae423cd966259fa.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-e30af9f545f4983c.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-30f7908dc00f30b4.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-0d28a46d9e557f74.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
![](http://upload-images.jianshu.io/upload_images/12344097-decd66c7b7d6c672.png?imageMogr2/auto-orient/strip|imageView2/2/w/1200/format/webp)
The same policy can be created in Kibana Dev Tools. Note that `min_age` below is 5 minutes, which is convenient for testing; use "30d" to match the 30-day retention described above.

```
PUT _ilm/policy/log_delete_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "5m",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```
```
PUT _template/logs_template
{
  "index_patterns": ["testing*"],
  "settings": {
    "index.lifecycle.name": "log_delete_policy"
  },
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  }
}
```
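Because the index names embed a zero-padded date, they sort lexicographically by age, which makes it easy to reason about what the 30-day policy will delete. A small offline sketch (the `testing-` prefix matches the index pattern above; `date -d` is GNU date — on macOS use `date -v-30d`):

```shell
# Today's index, mirroring "testing-%{+yyyy.MM.dd}" in the Filebeat config.
today="testing-$(date +%Y.%m.%d)"
# Indices dated before this cutoff are candidates for deletion once min_age passes.
cutoff="testing-$(date -d '30 days ago' +%Y.%m.%d)"
echo "today's index: $today"
echo "delete cutoff: $cutoff"
```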