
ELK Log Collection && Log Collection Approaches


31. ELK Log Collection

Reference: Log analysis system - deploying an Elasticsearch cluster on k8s - 帝都攻城狮 - 博客园 (cnblogs.com)

https://blog.csdn.net/miss1181248983/article/details/113773943

31.1 Log Collection Approaches

  1. Node-level collection: run the log collector as a DaemonSet on every node and collect json-file logs (standard output /dev/stdout and standard error /dev/stderr).
  2. Sidecar collection: a sidecar container (one pod, multiple containers) collects the logs of one or more business containers in the same pod, usually sharing the log files between the business container and the sidecar through an emptyDir volume.
  3. Built-in collection: run the log collection process inside the business container itself.

31.2 DaemonSet Log Collection

logstash (in-container collection) --> kafka-zk --> logstash (filter and write) --> ES-cluster

  • Mount the logs onto the host so they can be collected
  Run the log collection service as a DaemonSet; it mainly collects the following types of logs:
  1. Node-level collection: the DaemonSet collector gathers json-file logs (standard output /dev/stdout and standard error /dev/stderr), i.e. the stdout/stderr logs produced by the application.
  Since container logs only go to stdout/stderr, the container runtime's log driver must be set to the json-file type ahead of time.
  How it works:
  The json-file logs written by the runtime end up on the host; those host paths are then mounted into the logstash container, which filters and forwards them, completing the collection.
  • Host system logs and other logs stored as plain files on the node
Comparison: containerd vs docker
Log storage path:
  containerd: real path /var/log/pods/CONTAINER_NAMES; kubelet also creates symlinks under /var/log/containers pointing to /var/log/pods/CONTAINER_NAMES
  docker: real path /var/lib/docker/containers/CONTAINERID; kubelet creates symlinks under /var/log/pods and /var/log/containers pointing to /var/lib/docker/containers/CONTAINERID
Log configuration:
  containerd: set in /etc/systemd/system/kubelet.service with
    --container-log-max-files=5
    --container-log-max-size="100Mi"
    --logging-format="json"
  docker: set in /etc/docker/daemon.json with
    "log-driver": "json-file",
    "log-opts": {
      "max-file": "5",
      "max-size": "100m"
    }
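  • Verifying the log paths on a node
  The paths in the comparison above can be checked directly on a node. A minimal sketch, assuming a containerd-based node (the paths are the ones listed above):
  # pod log directories managed by kubelet (containerd runtime)
  ls /var/log/pods/
  # entries under /var/log/containers are symlinks back into /var/log/pods
  ls -l /var/log/containers/ | head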
  • Dockerfile
  root@k8s-master1:~1.logstash-image-Dockerfile# cat Dockerfile
  FROM logstash:7.12.1
   
  USER root
  WORKDIR /usr/share/logstash
  #RUN rm -rf config/logstash-sample.conf
  ADD logstash.yml /usr/share/logstash/config/logstash.yml
  ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
  • logstash.conf
  #the paths collected below are host paths mounted into the logstash container
  root@k8s-master1:~1.logstash-image-Dockerfile# cat logstash.conf
  input {
    file {
      #docker log path
      #path => "/var/lib/docker/containers/*/*-json.log" #docker
      #containerd log path
      path => "/var/log/pods/*/*/*.log"
      #read pre-existing logs from the beginning; the default is to start at the end
      start_position => "beginning"
      #tag containerd json-file logs with this type
      type => "jsonfile-daemonset-applog"
    }

    file {
      #also collect the host's system logs; the mount is defined in the k8s YAML
      path => "/var/log/*.log"
      start_position => "beginning"
      #tag host system logs with this type
      type => "jsonfile-daemonset-syslog"
    }
  }

  output {
    if [type] == "jsonfile-daemonset-applog" {
      kafka {
        #KAFKA_SERVER variable defined in the k8s YAML
        bootstrap_servers => "${KAFKA_SERVER}"
        #TOPIC_ID variable defined in the k8s YAML
        topic_id => "${TOPIC_ID}"
        batch_size => 16384 #amount of data sent per batch to Kafka, in bytes
        #json codec
        codec => "${CODEC}"
      }
    }

    if [type] == "jsonfile-daemonset-syslog" {
      kafka {
        bootstrap_servers => "${KAFKA_SERVER}"
        topic_id => "${TOPIC_ID}"
        batch_size => 16384
        codec => "${CODEC}" #system logs are not json
      }
    }
  }
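  • Checking the pipeline syntax (optional)
  Before baking the pipeline into the image, the config can be syntax-checked locally. A minimal sketch, assuming docker is available on the build host and using throwaway values for the environment variables the pipeline expects:
  # parse the pipeline and exit without starting it
  docker run --rm \
    -e KAFKA_SERVER=127.0.0.1:9092 -e TOPIC_ID=test-topic -e CODEC=json \
    -v $(pwd)/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
    logstash:7.12.1 bin/logstash --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf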
  • logstash.yml
  root@k8s-master1:~1.logstash-image-Dockerfile# cat logstash.yml
  http.host: "0.0.0.0"
  #comment this line out; xpack is the security/monitoring integration and is not used here
  #xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
  • build-command.sh
  root@k8s-master1:~1.logstash-image-Dockerfile# cat build-commond.sh
  #!/bin/bash
   
  #docker build -t harbor.nbrhce.com/baseimages/logstash:v7.12.1-json-file-log-v4 .
   
  #docker push harbor.nbrhce.com/baseimages/logstash:v7.12.1-json-file-log-v4
   
  nerdctl build -t harbor.nbrhce.com/baseimages/logstash:v7.12.1-json-file-log-v1 .
   
  nerdctl push harbor.nbrhce.com/baseimages/logstash:v7.12.1-json-file-log-v1
  • k8s YAML: DaemonSet for in-container logstash collection
  root@k8s-master1:~/20220821/ELK/1.daemonset-logstash# cat 2.DaemonSet-logstash.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: logstash-elasticsearch
    namespace: kube-system
    labels:
      k8s-app: logstash-logging
  spec:
    selector:
      matchLabels:
        name: logstash-elasticsearch
    template:
      metadata:
        labels:
          name: logstash-elasticsearch
      spec:
        tolerations:
        # this toleration is to have the daemonset runnable on master nodes
        # remove it if your masters can't run pods
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        containers:
        - name: logstash-elasticsearch
          image: harbor.nbrhce.com/baseimages/logstash:v7.12.1-json-file-log-v1
          env:
          - name: "KAFKA_SERVER"
            value: "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
          - name: "TOPIC_ID"
            value: "jsonfile-log-topic"
          - name: "CODEC"
            value: "json"
          # resources:
          #   limits:
          #     cpu: 1000m
          #     memory: 1024Mi
          #   requests:
          #     cpu: 500m
          #     memory: 1024Mi
          volumeMounts:
          - name: varlog #host system log volume
            mountPath: /var/log #mount point for the host's system logs
          - name: varlibdockercontainers #container log volume; must match the collection path in logstash.conf
            #mountPath: /var/lib/docker/containers #mount path when using docker
            mountPath: /var/log/pods #mount path when using containerd; must be identical to the path logstash collects from
            readOnly: false
        terminationGracePeriodSeconds: 30
        volumes:
        #mount the host's system log directory into the logstash container so it can be collected
        - name: varlog
          hostPath:
            path: /var/log
        #mount the host's container log directory into logstash
        - name: varlibdockercontainers
          hostPath:
            #path: /var/lib/docker/containers #docker
            path: /var/log/pods #containerd
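  • Deploy and verify
  A quick way to confirm the DaemonSet is shipping logs (the label selector and topic name are the ones defined above; the consumer script assumes a standard Kafka installation on one of the brokers):
  kubectl apply -f 2.DaemonSet-logstash.yaml
  kubectl -n kube-system get pods -l name=logstash-elasticsearch -o wide
  # consume a few messages to confirm logs are arriving in Kafka
  kafka-console-consumer.sh --bootstrap-server 172.31.4.101:9092 \
    --topic jsonfile-log-topic --from-beginning --max-messages 5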
  • logstash filter config (conf)
  #this logstash instance runs separately: it reads from kafka, filters, and writes to the ES cluster
  root@k8s-master1:~1.daemonset-logstash# cat 3.logsatsh-daemonset-jsonfile-kafka-to-es.conf
  input {
    kafka {
      #kafka cluster addresses
      bootstrap_servers => "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
      #topics to consume from
      topics => ["jsonfile-log-topic"]
      #json codec
      codec => "json"
    }
  }

  output {
    #if [fields][type] == "app1-access-log" {
    if [type] == "jsonfile-daemonset-applog" {
      elasticsearch {
        hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
        #the index is created automatically if it does not exist
        index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
      }
    }

    if [type] == "jsonfile-daemonset-syslog" {
      elasticsearch {
        hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
        index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
      }
    }
  }
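  • Running this stage and checking the indices
  A sketch of how to run the kafka-to-ES pipeline and confirm the daily indices appear, assuming logstash is installed on the host that runs it (the IPs are the ones used above):
  /usr/share/logstash/bin/logstash -f 3.logsatsh-daemonset-jsonfile-kafka-to-es.conf
  # in another shell, list the indices created by the outputs above
  curl -s 'http://172.31.2.101:9200/_cat/indices?v' | grep jsonfile-daemonset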

31.3 Sidecar Container Log Collection

  • Overview: a lightweight log collection container
  A sidecar container (one pod, multiple containers) collects the logs of one or more business containers in the same pod, usually sharing the log files between the business container and the sidecar through an emptyDir volume.
  Container filesystems are isolated from each other, so an emptyDir volume is used to share the logs: the business container mounts its log directory on the emptyDir, and the sidecar collects from that same emptyDir path.
  Advantage: log collection can be tailored per service.
  Disadvantages: extra resource consumption, and existing workloads have to be modified to add the sidecar container to the pod.
  • Dockerfile for building the sidecar image
  root@k8s-master1:~2.sidecar-logstash/1.logstash-image-Dockerfile# cat Dockerfile
  FROM logstash:7.12.1
   
  USER root
  WORKDIR /usr/share/logstash
  #RUN rm -rf config/logstash-sample.conf
  ADD logstash.yml /usr/share/logstash/config/logstash.yml
  ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf
  • logstash.yml
  root@k8s-master1:~2.sidecar-logstash/1.logstash-image-Dockerfile# cat logstash.yml
  http.host: "0.0.0.0"
  #xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
  • logstash.conf
  root@k8s-master1:~2.sidecar-logstash/1.logstash-image-Dockerfile# cat logstash.conf
  input {
    file {
      path => "/var/log/applog/catalina.out"
      start_position => "beginning"
      type => "app1-sidecar-catalina-log"
    }
    file {
      path => "/var/log/applog/localhost_access_log.*.txt"
      start_position => "beginning"
      type => "app1-sidecar-access-log"
    }
  }

  output {
    if [type] == "app1-sidecar-catalina-log" {
      kafka {
        bootstrap_servers => "${KAFKA_SERVER}"
        topic_id => "${TOPIC_ID}"
        batch_size => 16384 #amount of data sent per batch to Kafka, in bytes
        codec => "${CODEC}"
      }
    }

    if [type] == "app1-sidecar-access-log" {
      kafka {
        bootstrap_servers => "${KAFKA_SERVER}"
        topic_id => "${TOPIC_ID}"
        batch_size => 16384
        codec => "${CODEC}"
      }
    }
  }
  • tomcat.yaml
  root@k8s-master1:~/20220821/ELK/2.sidecar-logstash# cat 2.tomcat-app1.yaml
  kind: Deployment
  #apiVersion: extensions/v1beta1
  apiVersion: apps/v1
  metadata:
    labels:
      app: magedu-tomcat-app1-deployment-label
    name: magedu-tomcat-app1-deployment #name of this deployment
    namespace: magedu
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: magedu-tomcat-app1-selector
    template:
      metadata:
        labels:
          app: magedu-tomcat-app1-selector
      spec:
        containers:
        - name: sidecar-container
          image: harbor.magedu.net/baseimages/logstash:v7.12.1-sidecar
          imagePullPolicy: IfNotPresent
          #imagePullPolicy: Always
          #these variables are passed to the kafka output in logstash.conf
          env:
          - name: "KAFKA_SERVER"
            value: "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
          - name: "TOPIC_ID"
            value: "tomcat-app1-topic"
          - name: "CODEC"
            value: "json"
          #mount the shared log volume at the path used in logstash.conf
          volumeMounts:
          - name: applogs
            mountPath: /var/log/applog
        - name: magedu-tomcat-app1-container
          image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
          imagePullPolicy: IfNotPresent
          #imagePullPolicy: Always
          ports:
          - containerPort: 8080
            protocol: TCP
            name: http
          env:
          - name: "password"
            value: "123456"
          - name: "age"
            value: "18"
          resources:
            limits:
              cpu: 1
              memory: "512Mi"
            requests:
              cpu: 500m
              memory: "512Mi"
          volumeMounts:
          - name: applogs
            mountPath: /apps/tomcat/logs
          startupProbe:
            httpGet:
              path: /myapp/index.html
              port: 8080
            initialDelaySeconds: 5 #delay before the first probe
            failureThreshold: 3 #consecutive failures before the probe is marked failed
            periodSeconds: 3 #probe interval
          readinessProbe:
            httpGet:
              #path: /monitor/monitor.html
              path: /myapp/index.html
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 3
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          livenessProbe:
            httpGet:
              #path: /monitor/monitor.html
              path: /myapp/index.html
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 3
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
        volumes:
        - name: applogs #emptyDir shared between the business container and the sidecar so the sidecar can collect the business logs
          emptyDir: {}
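  • Verifying the shared emptyDir
  One way to confirm the log sharing works is to check that both containers in a running pod see the same files (the pod lookup below is just an example using the label defined above):
  POD=$(kubectl -n magedu get pods -l app=magedu-tomcat-app1-selector -o jsonpath='{.items[0].metadata.name}')
  # tomcat writes its logs here
  kubectl -n magedu exec $POD -c magedu-tomcat-app1-container -- ls /apps/tomcat/logs
  # the sidecar sees the same files through the shared emptyDir
  kubectl -n magedu exec $POD -c sidecar-container -- ls /var/log/applog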

31.4 Filebeat as a Built-in Process in the Container

  • Dockerfile: filebeat is baked in when the business image is built
  root@k8s-master1:~/20220821/ELK/3.container-filebeat-process/1.webapp-filebeat-image-Dockerfile# cat Dockerfile
  #tomcat web1
  FROM harbor.magedu.net/pub-images/tomcat-base:v8.5.43
   
  ADD catalina.sh /apps/tomcat/bin/catalina.sh
  ADD server.xml /apps/tomcat/conf/server.xml
  #ADD myapp/* /data/tomcat/webapps/myapp/
  ADD myapp.tar.gz /data/tomcat/webapps/myapp/
  ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
  ADD filebeat.yml /etc/filebeat/filebeat.yml
  RUN chown -R tomcat.tomcat /data/ /apps/
  #ADD filebeat-7.5.1-x86_64.rpm /tmp/
  #RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb
   
  EXPOSE 8080 8443
   
  CMD ["/apps/tomcat/bin/run_tomcat.sh"]
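  • Build and push
  A hedged build/push sketch for this image; the tag matches the one referenced in the deployment further below, and nerdctl mirrors the earlier build-commond.sh (use docker build/push instead if that is your tooling):
  nerdctl build -t harbor.magedu.net/magedu/tomcat-app1:v1-filebeat .
  nerdctl push harbor.magedu.net/magedu/tomcat-app1:v1-filebeat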
  • filebeat configuration file
  root@k8s-master1:~1.webapp-filebeat-image-Dockerfile# cat filebeat.yml
  #log inputs
  filebeat.inputs:
  - type: log
    #enabled toggles this input; if it is not true the input is not loaded
    enabled: true
    paths:
    #business container runtime log
    - /apps/tomcat/logs/catalina.out
    fields:
      #custom field used later to route the events
      type: filebeat-tomcat-catalina
  - type: log
    #a second input for the access log
    enabled: true
    paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt
    fields:
      type: filebeat-tomcat-accesslog
  #default module configuration, can be left as-is
  filebeat.config.modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false
  setup.template.settings:
    index.number_of_shards: 1
  setup.kibana:

  #output destination
  output.kafka:
    hosts: ["172.31.4.101:9092"]
    #required acks, for delivery confirmation
    required_acks: 1
    #kafka topic to write to
    topic: "filebeat-magedu-app1"
    #compression saves bandwidth at the cost of CPU
    compression: gzip
    #maximum message size in bytes
    max_message_bytes: 1000000
  #output.redis:
  #  hosts: ["172.31.2.105:6379"]
  #  key: "k8s-magedu-app1"
  #  db: 1
  #  timeout: 5
  #  password: "123456"
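  • Validating filebeat.yml (optional)
  filebeat can validate this file before the image is built. A small sketch, assuming filebeat is installed on the build host; note that "test output" actually connects to the Kafka broker, so it only succeeds from a host that can reach 172.31.4.101:9092:
  filebeat test config -c $(pwd)/filebeat.yml
  filebeat test output -c $(pwd)/filebeat.yml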
  • Startup script
  root@k8s-master1:~1.webapp-filebeat-image-Dockerfile# cat run_tomcat.sh
  #!/bin/bash
  #echo "nameserver 223.6.6.6" > /etc/resolv.conf
  #echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts
   
  /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
  su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
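  #tail keeps PID 1 in the foreground so the container stays running after filebeat and tomcat are started in the background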
  tail -f /etc/hosts
  • k8s filebeat service account
  #this RBAC is only needed when filebeat is deployed as a DaemonSet and has to query the API server; since filebeat here runs inside the application pod, this service account can be skipped for now
  root@k8s-master1:~3.container-filebeat-process# cat 2.filebeat-serviceaccount.yaml
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: filebeat-serviceaccount-clusterrole
    labels:
      k8s-app: filebeat-serviceaccount-clusterrole
  rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
    - namespaces
    - pods
    - nodes
    verbs:
    - get
    - watch
    - list

  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: filebeat-serviceaccount-clusterrolebinding
  subjects:
  - kind: ServiceAccount
    name: default
    namespace: magedu
  roleRef:
    kind: ClusterRole
    name: filebeat-serviceaccount-clusterrole
    apiGroup: rbac.authorization.k8s.io
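  • Checking the RBAC binding (only if you apply it)
  kubectl auth can-i is a standard way to confirm the binding took effect for the default service account in the magedu namespace:
  kubectl apply -f 2.filebeat-serviceaccount.yaml
  kubectl auth can-i list pods --as=system:serviceaccount:magedu:default --all-namespaces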
  • YAML
  root@k8s-master1:~3.container-filebeat-process# cat 3.tomcat-app1.yaml
  kind: Deployment
  #apiVersion: extensions/v1beta1
  apiVersion: apps/v1
  metadata:
    labels:
      app: magedu-tomcat-app1-filebeat-deployment-label
    name: magedu-tomcat-app1-filebeat-deployment
    namespace: magedu
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: magedu-tomcat-app1-filebeat-selector
    template:
      metadata:
        labels:
          app: magedu-tomcat-app1-filebeat-selector
      spec:
        containers:
        - name: magedu-tomcat-app1-filebeat-container
          image: harbor.magedu.net/magedu/tomcat-app1:v1-filebeat
          imagePullPolicy: IfNotPresent
          #imagePullPolicy: Always
          ports:
          - containerPort: 8080
            protocol: TCP
            name: http
          env:
          - name: "password"
            value: "123456"
          - name: "age"
            value: "18"
          resources:
            limits:
              cpu: 1
              memory: "512Mi"
            requests:
              cpu: 500m
              memory: "512Mi"
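  • Verifying the in-container filebeat process
  Because filebeat runs as an extra process inside the application container, it can be checked directly in a running pod (the pod lookup uses the label defined above; ps output depends on the base image):
  POD=$(kubectl -n magedu get pods -l app=magedu-tomcat-app1-filebeat-selector -o jsonpath='{.items[0].metadata.name}')
  kubectl -n magedu exec $POD -- ps -ef | grep -E 'filebeat|tomcat'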
  • service.yaml
  #NodePort service for testing
  root@k8s-master1:~3.container-filebeat-process# cat 4.tomcat-service.yaml
  ---
  kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: magedu-tomcat-app1-filebeat-service-label
    name: magedu-tomcat-app1-filebeat-service
    namespace: magedu
  spec:
    type: NodePort
    ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
      nodePort: 30092
    selector:
      app: magedu-tomcat-app1-filebeat-selector
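  • Generating some test traffic
  Hitting the NodePort produces access-log entries for filebeat to ship; <node-ip> below is a placeholder for any worker node address:
  curl -I http://<node-ip>:30092/myapp/index.html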
  • logstash configuration: forwarding from kafka to ES
  #filebeat nests the custom fields defined under "fields:" inside a "fields" object, so the conditionals below match on [fields][type] rather than [type]
  root@k8s-master1:~3.container-filebeat-process# cat 5.logstash-filebeat-process-kafka-to-es.conf
  input {
    kafka {
      bootstrap_servers => "172.31.4.101:9092,172.31.4.102:9092,172.31.4.103:9092"
      topics => ["filebeat-magedu-app1"]
      codec => "json"
    }
  }

  output {
    if [fields][type] == "filebeat-tomcat-catalina" {
      elasticsearch {
        hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
        index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
      }
    }

    if [fields][type] == "filebeat-tomcat-accesslog" {
      elasticsearch {
        hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
        index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
      }
    }
  }

From: https://www.cnblogs.com/gaoyanbing/p/17822047.html
