Implementation: collecting container console logs
Deploy filebeat as a DaemonSet, at the position marked by the red box in the figure above.
Pick a machine and build the image. The Dockerfile:
FROM docker.elastic.co/beats/filebeat:7.9.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
USER filebeat
docker build . -t 10.0.7.12/k8s/filebeat:7.9.0
docker push 10.0.7.12/k8s/filebeat:7.9.0
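If 10.0.7.12 is a private registry (the DaemonSet below pulls from it through an imagePullSecret), the push needs a prior login on the build machine, roughly:
docker login 10.0.7.12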
The filebeat configuration file:
cat filebeat.yml
filebeat.inputs:
- type: log
  paths:
    # Paths to collect logs from
    - /var/log/pods/*/*/*.log
  fields:
    # Custom field, filled from the TOPIC_ID environment variable
    log_topic: ${TOPIC_ID}
  tail_files: true
  # Remove file state; clean_inactive must be greater than ignore_older + scan_frequency (how often log files are scanned, 10s by default)
  clean_inactive: 48h
  # Files last modified longer ago than this are not collected; disabled by default (ignore_older must be greater than close_inactive)
  ignore_older: 24h
  # Close the file handle once the harvester has read the last line and the file has not changed for 1 minute
  close_inactive: 1m
output.kafka:
  # List of Kafka broker addresses used to fetch cluster metadata
  hosts: ["10.0.7.53:9092", "10.0.7.54:9092", "10.0.7.55:9092"]
  # Use the custom field fields.log_topic to set the topic of each event
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    # With reachable_only set to true, events are published only to reachable partitions
    reachable_only: true
  # ACK reliability level: 0 = no response, 1 = wait for local commit, -1 = wait for all replicas to commit. Default is 1
  required_acks: 1
  # Output compression codec: one of none, snappy, lz4 or gzip. Default is gzip
  compression: gzip
  # Maximum permitted size of JSON-encoded messages; larger messages are dropped. Default is 1000000 (bytes)
  max_message_bytes: 1000000
# Filebeat's own logging: errors only
logging.level: error
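For reference, the glob /var/log/pods/*/*/*.log assumes the layout that kubelet/containerd typically writes on each node; the concrete names below are only illustrative:
/var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log
The three wildcards therefore match the pod directory, the container directory, and the log files inside it.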
Deploy the DaemonSet in the k8s cluster:
cat daemonset-filebeat.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-kafka
  namespace: kube-system
  labels:
    k8s-app: filebeat-kafka
spec:
  selector:
    matchLabels:
      name: filebeat-kafka
  template:
    metadata:
      labels:
        name: filebeat-kafka
    spec:
      imagePullSecrets:            # see the secret-creation note after the manifest
      - name: myregistrykey
      containers:
      - name: filebeat-kafka
        image: 10.0.7.12/k8s/filebeat:7.9.0
        imagePullPolicy: Always
        env:
        - name: "TOPIC_ID"
          value: "daemonset-pod-console-log"
        volumeMounts:
        - name: containerdlog        # container log mount; keep it consistent with the collection path in filebeat.yml
          mountPath: /var/log/pods   # containerd log path; must match filebeat's log collection path
          readOnly: false
      volumes:
      - name: containerdlog
        hostPath:
          path: /var/log/pods        # containerd's log directory on the host
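The manifest pulls the image with the imagePullSecret myregistrykey. If that secret does not yet exist in kube-system, it can be created along these lines (a sketch; the username and password are placeholders):
kubectl create secret docker-registry myregistrykey \
  --docker-server=10.0.7.12 \
  --docker-username=<user> \
  --docker-password=<password> \
  -n kube-system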
kubectl apply -f daemonset-filebeat.yaml
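To verify the pipeline end to end, check that the DaemonSet pods are running on every node and then consume the topic from one of the Kafka brokers (the path of kafka-console-consumer.sh depends on where Kafka is installed):
kubectl -n kube-system get pods -l name=filebeat-kafka -o wide
bin/kafka-console-consumer.sh --bootstrap-server 10.0.7.53:9092 --topic daemonset-pod-console-log --from-beginning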