Cloud Native Week 7 - Kubernetes Log Collection

Kubernetes log collection

Goals of log collection:

  • Collect distributed log data in one place for centralized querying and management
  • Troubleshooting
  • Security information and event management
  • Reporting and dashboards

Value of log collection:

  • Log querying, troubleshooting, fault recovery, and fault self-healing
  • Application log analysis and error alerting
  • Performance analysis and user-behavior analysis

Log collection approaches:

  1. Node-level collection: deploy the log collector as a DaemonSet and collect json-file logs, i.e. the standard output (/dev/stdout) and standard error (/dev/stderr) of the containers on each node (a quick inspection is sketched after this list).
  2. Sidecar container (multiple containers in one pod): collect the logs of one or more application containers inside the same pod, usually sharing the log files between the application container and the sidecar through an emptyDir volume.
  3. Run the log collection agent inside the application container itself.
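
With containerd, the json-file logs from approach 1 live under /var/log/pods on each node; a quick way to inspect them (node shell access assumed; pod name and namespace are placeholders):

ls /var/log/pods/                        # one directory per pod
kubectl logs -n <namespace> <pod-name>   # the same stdout/stderr via the API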

Installing Elasticsearch

Elasticsearch will be installed on 192.168.110.208 and 192.168.110.209.

Download elasticsearch-7.12.1-amd64.deb from the Elasticsearch website and install it:
dpkg -i elasticsearch-7.12.1-amd64.deb
Then edit elasticsearch.yml under /etc/elasticsearch:
(screenshots: elasticsearch.yml cluster settings)
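
The exact values are only in the screenshots; a minimal sketch of the usual elasticsearch.yml settings for a two-node cluster on .208/.209 (the cluster and node names are placeholders):

cluster.name: es-cluster
node.name: node-1                # node-2 on 192.168.110.209
path.data: /var/lib/elasticsearch
network.host: 192.168.110.208    # each node's own address
http.port: 9200
discovery.seed_hosts: ["192.168.110.208", "192.168.110.209"]
cluster.initial_master_nodes: ["node-1", "node-2"]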

Run systemctl restart elasticsearch.service to start Elasticsearch.

Then open port 9200 in a browser to confirm the node responds.

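The same check from the command line (either node works):

curl http://192.168.110.208:9200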

Check the cluster state with a browser plugin such as elasticsearch-head (screenshot omitted).

Installing ZooKeeper


mv apache-zookeeper-3.7.1-bin /usr/local/zookeeper

ZooKeeper reads conf/zoo.cfg at startup, so the edited sample below should be saved as zoo.cfg:

root@192:/usr/local/zookeeper/conf# cat /usr/local/zookeeper/conf/zoo_sample.cfg 
# The number of milliseconds of each tick
tickTime=2000           ## base time unit, in milliseconds
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5              ## ticks allowed between a request and its acknowledgement
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/zookeeper    ## ZooKeeper data directory
# the port at which the clients will connect
clientPort=2181            ## client port
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

server.1=192.168.110.208:2888:3888
server.2=192.168.110.209:2888:3888

Create the data directory:

root@192:/usr/local/zookeeper/conf# mkdir -p /data/zookeeper

Server .208 gets myid 1 and server .209 gets myid 2:

root@192:/usr/local/zookeeper/conf# echo 1 > /data/zookeeper/myid   # on 192.168.110.208
root@192:/usr/local/zookeeper/conf# echo 2 > /data/zookeeper/myid   # on 192.168.110.209

Start ZooKeeper on both nodes:

/usr/local/zookeeper/bin/zkServer.sh start

Check whether ZooKeeper started successfully and whether the two nodes formed a cluster:

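A quick verification, assuming the default layout; in a two-node ensemble one node should report Mode: leader and the other Mode: follower:

/usr/local/zookeeper/bin/zkServer.sh status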

Installing Kafka

mv kafka_2.13-2.6.2 /usr/local/kafka

Edit the server.properties configuration file (note that each broker also needs a unique broker.id, which is not shown in the listing below):

root@192:/usr/local/kafka/config# cat /usr/local/kafka/config/server.properties 

listeners=PLAINTEXT://192.168.110.208:9092   ## listener IP and port (this broker's own address)

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/kafka-logs


num.partitions=1


num.recovery.threads.per.data.dir=1


offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=192.168.110.208:2181,192.168.110.209:2181  # ZooKeeper cluster addresses (important)

zookeeper.connection.timeout.ms=18000

group.initial.rebalance.delay.ms=0


Run the startup script:

/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

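To confirm the broker is up, list topics via the bootstrap server (kafka-topics.sh accepts --bootstrap-server since Kafka 2.2):

/usr/local/kafka/bin/kafka-topics.sh --bootstrap-server 192.168.110.208:9092 --list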

Install Offset Explorer and point it at the ZooKeeper/Kafka addresses above (configuration screenshots omitted).

Installing Logstash

Install the package, then add a pipeline configuration under /etc/logstash/conf.d:

dpkg -i logstash-7.12.1-amd64.deb


root@192:/etc/logstash/conf.d# pwd
/etc/logstash/conf.d

root@192:/etc/logstash/conf.d# cat log-to-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.110.208:9092,192.168.110.209:9092"
    topics => ["jsonfile-log-topic"]
    codec => "json"
  }
}


output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["192.168.110.208:9200","192.168.110.209:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
    }}

  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["192.168.110.208:9200","192.168.110.209:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
    }}

}

root@192:/etc/logstash/conf.d# systemctl start logstash.service
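
The pipeline syntax can be checked before starting the service (stock deb paths):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/log-to-es.conf --config.test_and_exit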


Log collection example 1: DaemonSet-based collection

Run the log collection service as a DaemonSet; it mainly collects two kinds of logs:

  1. Node-level json-file logs, i.e. the standard output (/dev/stdout) and standard error (/dev/stderr) that the application containers produce.
  2. Host system logs and other logs stored as files on the node.

(diagram: DaemonSet log-collection architecture)

DaemonSet YAML:


apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: logstash-logging
spec:
  selector:
    matchLabels:
      name: logstash-elasticsearch
  template:
    metadata:
      labels:
        name: logstash-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: logstash-elasticsearch
        image: harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-json-file-log-v1
        env:
        - name: "KAFKA_SERVER"
          value: "192.168.110.208:9092,192.168.110.209:9092"
        - name: "TOPIC_ID"
          value: "jsonfile-log-topic"
        - name: "CODEC"
          value: "json"
#        resources:
#          limits:
#            cpu: 1000m
#            memory: 1024Mi
#          requests:
#            cpu: 500m
#            memory: 1024Mi
        volumeMounts:
        - name: varlog # volume for host system logs
          mountPath: /var/log # mount point for the host's system logs
        - name: varlibdockercontainers # volume for container logs; must match the collection path in the logstash config
          #mountPath: /var/lib/docker/containers # mount path when the runtime is docker
          mountPath: /var/log/pods # mount path for containerd; must be identical to logstash's collection path
          readOnly: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log # host system logs
      - name: varlibdockercontainers
        hostPath:
          #path: /var/lib/docker/containers # host log path when the runtime is docker
          path: /var/log/pods # host log path for containerd
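
Deploy and verify (the manifest filename is an assumption):

kubectl apply -f logstash-daemonset.yaml
kubectl get pods -n kube-system -l name=logstash-elasticsearch -o wide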


At this point the data is visible in Elasticsearch (screenshot omitted).

Installing Kibana

dpkg -i kibana-7.12.1-amd64.deb

root@192:/etc/kibana# cat /etc/kibana/kibana.yml  |grep -v "^#" | grep -v "^$"
server.host: "192.168.110.206"
elasticsearch.hosts: ["http://192.168.110.208:9200"]
i18n.locale: "zh-CN"

root@192:/etc/kibana# systemctl start kibana.service 
root@192:/etc/kibana# systemctl status kibana.service 
● kibana.service - Kibana
     Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
     Active: active (running) since Wed 2023-06-28 06:23:01 UTC; 7s ago
       Docs: https://www.elastic.co
   Main PID: 2852340 (node)
      Tasks: 11 (limit: 4530)
     Memory: 192.9M
        CPU: 8.034s
     CGroup: /system.slice/kibana.service
             └─2852340 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist --logging.dest=/var/log/kibana/kibana.log --pid.file=/run/kibana/kibana.pid

Open Kibana and create index patterns for the new indices under Stack Management (screenshots omitted).

The system logs are now visible under Discover (screenshot omitted).

Log collection example 2: the sidecar pattern

Use a sidecar container (multiple containers in one pod) to collect the logs of one or more application containers in the same pod, usually sharing the log files between the application container and the sidecar through an emptyDir volume.

Compared with the DaemonSet approach, the drawback is higher memory and resource consumption, because a collector runs in every pod; the advantage is that collection happens per pod, so each application can have its own index and log ownership is easier to attribute.

(diagram: sidecar log-collection architecture)

1. Build the sidecar image


root@192:/usr/local/src/20230528/elk-cases/2.sidecar-logstash/1.logstash-image-Dockerfile# cat Dockerfile 
FROM logstash:7.12.1


USER root
WORKDIR /usr/share/logstash 
#RUN rm -rf config/logstash-sample.conf
ADD logstash.yml /usr/share/logstash/config/logstash.yml
ADD logstash.conf /usr/share/logstash/pipeline/logstash.conf 
root@192:/usr/local/src/20230528/elk-cases/2.sidecar-logstash/1.logstash-image-Dockerfile# cat build-commond.sh 
#!/bin/bash

#docker build -t harbor.magedu.local/baseimages/logstash:v7.12.1-sidecar .

#docker push harbor.magedu.local/baseimages/logstash:v7.12.1-sidecar
nerdctl  build -t harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-sidecar .
nerdctl push harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-sidecar
root@192:/usr/local/src/20230528/elk-cases/2.sidecar-logstash/1.logstash-image-Dockerfile# cat logstash.conf 
input {
  file {
    path => "/var/log/applog/catalina.out"
    start_position => "beginning"
    type => "app1-sidecar-catalina-log"
  }
  file {
    path => "/var/log/applog/localhost_access_log.*.txt"
    start_position => "beginning"
    type => "app1-sidecar-access-log"
  }
}

output {
  if [type] == "app1-sidecar-catalina-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384  # Kafka producer batch size, in bytes
      codec => "${CODEC}" 
   } }

  if [type] == "app1-sidecar-access-log" {
    kafka {
      bootstrap_servers => "${KAFKA_SERVER}"
      topic_id => "${TOPIC_ID}"
      batch_size => 16384
      codec => "${CODEC}"
  }}
}



2. Deploy the web service


root@192:/usr/local/src/20230528/elk-cases/2.sidecar-logstash# cat 2.tomcat-app1.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-deployment-label
  name: magedu-tomcat-app1-deployment # name of the deployment for the current version
  namespace: magedu
spec:
  replicas: 2
  selector:
    matchLabels:
      app: magedu-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-container
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: applogs
          mountPath: /apps/tomcat/logs
        startupProbe:
          httpGet:
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5 # wait 5s before the first probe
          failureThreshold: 3  # failures before the probe is considered failed
          periodSeconds: 3 # probe interval
        readinessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /myapp/index.html
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      - name: sidecar-container
        image: harbor.linuxarchitect.io/baseimages/logstash:v7.12.1-sidecar
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        env:
        - name: "KAFKA_SERVER"
          value: "192.168.110.208:9092,192.168.110.209:9092"
        - name: "TOPIC_ID"
          value: "tomcat-app1-topic"
        - name: "CODEC"
          value: "json"
        volumeMounts:
        - name: applogs
          mountPath: /var/log/applog
      volumes:
      - name: applogs # emptyDir shared between the app container and the sidecar
        emptyDir: {}
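
Deploy and watch the sidecar pick up the Tomcat logs (the pod name is a placeholder; the label selector comes from the YAML above):

kubectl apply -f 2.tomcat-app1.yaml
kubectl get pods -n magedu -l app=magedu-tomcat-app1-selector
kubectl logs -n magedu <pod-name> -c sidecar-container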



Log collection example 3: a collection agent built into the container

(diagram: in-container filebeat log-collection architecture)

Build the image

Review the build script and Dockerfile:

root@192:/usr/local/src/20230528/elk-cases/3.container-filebeat-process/1.webapp-filebeat-image-Dockerfile# cat build-command.sh 
#!/bin/bash
TAG=$1
#docker build -t  harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG} .
#sleep 3
#docker push  harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG}
nerdctl build -t  harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG}  .
nerdctl push harbor.linuxarchitect.io/magedu/tomcat-app1:${TAG}
root@192:/usr/local/src/20230528/elk-cases/3.container-filebeat-process/1.webapp-filebeat-image-Dockerfile# cat Dockerfile 
#tomcat web1
FROM harbor.linuxarchitect.io/pub-images/tomcat-base:v8.5.43 

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml 

ADD myapp.tar.gz /data/tomcat/webapps/myapp/
RUN chown  -R tomcat.tomcat /data/ /apps/
#ADD filebeat-7.5.1-x86_64.rpm /tmp/
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]
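
run_tomcat.sh itself is not shown in the post; a minimal sketch, assuming the base image ships filebeat and that the script starts it in the background before running Tomcat in the foreground (all paths are assumptions):

#!/bin/bash
# start filebeat in the background with the config added above
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml &
# run tomcat in the foreground so the container keeps running
su - tomcat -c "/apps/tomcat/bin/catalina.sh run"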


Review the filebeat configuration file that is added to the image:

root@192:/usr/local/src/20230528/elk-cases/3.container-filebeat-process/1.webapp-filebeat-image-Dockerfile# cat filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: filebeat-tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt 
  fields:
    type: filebeat-tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

output.kafka:
  hosts: ["192.168.110.208:9092","192.168.110.209:9092"]
  required_acks: 1
  topic: "filebeat-magedu-app1"
  compression: gzip
  max_message_bytes: 1000000


Run sh build-command.sh v4 (build output omitted).

Review the Tomcat deployment YAML:

root@192:/usr/local/src/20230528/elk-cases/3.container-filebeat-process# cat 3.tomcat-app1.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: magedu-tomcat-app1-filebeat-deployment-label
  name: magedu-tomcat-app1-filebeat-deployment
  namespace: magedu
spec:
  replicas: 2
  selector:
    matchLabels:
      app: magedu-tomcat-app1-filebeat-selector
  template:
    metadata:
      labels:
        app: magedu-tomcat-app1-filebeat-selector
    spec:
      containers:
      - name: magedu-tomcat-app1-filebeat-container
        image: harbor.linuxarchitect.io/magedu/tomcat-app1:v4
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"


Review the logstash.conf pipeline (the 172.31.x addresses in this listing come from a different lab environment):

input {
  kafka {
    bootstrap_servers => "172.31.2.107:9092,172.31.2.108:9092,172.31.2.109:9092"
    topics => ["jsonfile-log-topic","tomcat-app1-topic","filebeat-magedu-app1"]
    codec => "json"
  }
}




output {
  if [fields][type] == "filebeat-tomcat-catalina" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
    }}

  if [fields][type] == "filebeat-tomcat-accesslog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
    }}


  if [type] == "app1-sidecar-access-log" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "sidecar-app1-accesslog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "app1-sidecar-catalina-log" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "sidecar-app1-catalinalog-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
    }}

  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200","172.31.2.102:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
    }}

}
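
Once events flow, the new indices can be confirmed directly against Elasticsearch (using the 192.168.110.x cluster from this post):

curl http://192.168.110.208:9200/_cat/indices?v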



Check Kafka for the incoming messages and Elasticsearch for the new indices (screenshots omitted).

From: https://www.cnblogs.com/zhaoxiangyu-blog/p/17494160.html

    1.系统配置硬件配置基本要求资源大小硬盘>=20Gcpu>=2核内存>=2G本教程配置主机名IP配置master192.168.10.1553核+2G+20Gworker1192.168.10.2343核+2G+20Gworker2192.168.10.1473核+2G+20G2.安装必要软件所有机器都要......