Deploying Filebeat in a Kubernetes cluster
Filebeat needs to collect container logs from every node, so we deploy it as a DaemonSet.
# cat filebeat-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elk
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - events
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - ""
  resourceNames:
  - filebeat-prospectors
  resources:
  - configmaps
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
  labels:
    app: filebeat-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: filebeat
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: filebeat
  namespace: elk
# cat filebeat-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-configmap
  namespace: elk
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
          tags: ["k8s"]
          fields:
            log_topic: k8s
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
    setup.template.settings:
      index.number_of_shards: 3
      index.number_of_replicas: 1
      index.codec: best_compression
      _source.enabled: false
    output.kafka:
      hosts: ["kafka-kraft-statefulset-0.kafka-kraft-svc:9091","kafka-kraft-statefulset-1.kafka-kraft-svc:9091","kafka-kraft-statefulset-2.kafka-kraft-svc:9091"]
      topic: '%{[fields.log_topic]}'
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
    processors:
      - add_host_metadata:
          when.not.contains.tags: forwarded
      - add_cloud_metadata: ~
      - add_docker_metadata: ~
      - add_kubernetes_metadata: ~
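The `topic: '%{[fields.log_topic]}'` line routes each event to a Kafka topic named by the custom field set above, so events tagged `log_topic: k8s` land in the `k8s` topic. A minimal Python sketch of how such a Beats-style format string resolves against an event (the `resolve_topic` helper is hypothetical, not Filebeat code):

```python
import re

def resolve_topic(template: str, event: dict) -> str:
    """Expand %{[a.b.c]} references in a Beats-style format string
    by looking up the dotted key path in the event dictionary."""
    def lookup(match):
        value = event
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"%\{\[([^\]]+)\]\}", lookup, template)

event = {"fields": {"log_topic": "k8s"}, "message": "container log line"}
print(resolve_topic("%{[fields.log_topic]}", event))  # prints: k8s
```

Because the topic is computed per event, a single Filebeat instance can fan out to multiple topics if different inputs set different `log_topic` values.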
# cat filebeat.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elk
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: 3.127.33.174:8443/elk/filebeat:8.1.0
        imagePullPolicy: IfNotPresent
        command: [
          "filebeat",
          "-e",
          "-c", "/etc/filebeat.yml"
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 1000Mi
            cpu: 1000m
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: dockerlog
          mountPath: /home/docker/docker/containers
        - name: varlog
          mountPath: /var/log/
          readOnly: true
        - name: timezone
          mountPath: /etc/localtime
      volumes:
      - name: config
        configMap:
          defaultMode: 0644
          name: filebeat-configmap
      - name: dockerlog
        hostPath:
          path: /home/docker/docker/containers/
      - name: varlog
        hostPath:
          path: /var/log/
      - name: data
        hostPath:
          path: /home/k8s/data
          type: DirectoryOrCreate
      - name: timezone
        hostPath:
          path: /etc/localtime
      tolerations:
      - effect: NoExecute
        key: dedicated
        operator: Equal
        value: gpu
      - effect: NoSchedule
        operator: Exists
[root@k8s-master01 filebeat]# kubectl get pods -n elk -o wide | grep filebeat
Once Filebeat is deployed, go into Kafka and confirm that events are being pushed to the topic.
Alternatively, you can collect each container's logs by mounting the container log directories and reading them with a plain log input:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-configmap
  namespace: elk
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/pods/*.log
      fields:
        node: ${HOSTNAME}
      fields_under_root: true
      # This setting is critical: by default Filebeat ignores symlinks
      symlinks: true
      tags: ["k8s"]
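The `symlinks: true` setting matters because the kubelet exposes container logs under `/var/log/containers/` and `/var/log/pods/` as symlinks to the runtime's actual log files. A small self-contained Python sketch of the behaviour the setting addresses: globbing finds the symlinked `.log` entry, but a collector that skips symlinks (Filebeat's default for the `log` input) would never read it:

```python
import glob
import os
import tempfile

# Simulate the kubelet layout: a real log file plus a symlink to it,
# mirroring how /var/log/containers entries point at the runtime's logs.
workdir = tempfile.mkdtemp()
real = os.path.join(workdir, "pod.log")
link = os.path.join(workdir, "app-container.log")
with open(real, "w") as f:
    f.write("log line\n")
os.symlink(real, link)

for path in sorted(glob.glob(os.path.join(workdir, "*.log"))):
    if os.path.islink(path):
        # With symlinks: false (the default), this file would be skipped.
        print(path, "-> symlink to", os.path.realpath(path))
    else:
        print(path, "-> regular file")
```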
From: https://www.cnblogs.com/precomp/p/16741671.html