In cloud-native environments, log collection is a key part of system observability. This article walks you through deploying Filebeat via a Helm chart to automatically collect Kubernetes container logs and ship them to Kafka.

1. Prerequisites
- A running Kubernetes cluster (v1.18+)
- Helm package manager (v3.0+)
- A Kafka cluster (as the log destination)
- A private image registry (for hosting custom images)
2. Chart Package Management
2.1 Fetch the Official Chart
$ helm repo add elastic https://helm.elastic.co --force-update
"elastic" has been added to your repositories
$ helm pull elastic/filebeat --version 7.17.3
2.2 Push to a Private Registry
$ helm push filebeat-7.17.3.tgz oci://core.jiaxzeng.com/plugins
Pushed: core.jiaxzeng.com/plugins/filebeat:7.17.3
Digest: sha256:76778389d4c793b414d392e9283851b7356feec9619dd37f0b7272c8ce42bf01
2.3 Pull the Chart Locally
$ sudo helm pull oci://core.jiaxzeng.com/plugins/filebeat --version 7.17.3 --untar --untardir etc/kubernetes/addons/
Pulled: core.jiaxzeng.com/plugins/filebeat:7.17.3
Digest: sha256:76778389d4c793b414d392e9283851b7356feec9619dd37f0b7272c8ce42bf01
3. Core Configuration Walkthrough
3.1 DaemonSet Configuration
daemonset:
  enabled: true
  resources:
    requests:
      cpu: "100m"
      memory: "100Mi"
    limits:
      cpu: "1000m"
      memory: "200Mi"
❝ A DaemonSet must be used so that one Filebeat instance runs on every node.
3.2 Log Collection Configuration
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  fields:                         # extra custom field, used to route logs to different Kafka topics
    type: k8s_logs
  processors:
    - add_kubernetes_metadata:    # automatically inject Kubernetes metadata
        host: ${NODE_NAME}
        matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
- type: log
  fields:
    type: messages
  paths:
    - /var/log/messages
❝ Path notes:
/var/log/containers/*.log: container stdout/stderr logs generated by the kubelet
/var/log/messages: system-level logs (make sure this path exists on the nodes)
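The logs_path matcher works because the kubelet names each symlink in /var/log/containers as <pod>_<namespace>_<container>-<containerID>.log, which add_kubernetes_metadata parses to look up the matching pod. A quick local sketch of that naming convention (the filename below is a made-up example):

```shell
# Example filename following the kubelet symlink convention:
#   <pod-name>_<namespace>_<container-name>-<container-id>.log
f="nginx-7c9d8_default_nginx-0123abcd.log"

# Strip the .log suffix, then split on underscores
# (pod, namespace, and container names cannot themselves contain "_")
base="${f%.log}"
pod="$(echo "$base" | cut -d_ -f1)"
ns="$(echo "$base" | cut -d_ -f2)"
echo "pod=$pod namespace=$ns"   # -> pod=nginx-7c9d8 namespace=default
```

This is why the matcher only needs the logs_path prefix: everything else is recoverable from the filename.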
3.3 Kafka Output Configuration
output.kafka:
  hosts: ["172.139.20.17:9092", "172.139.20.81:9092", "172.139.20.177:9092"]
  # route events to different topics based on the custom field
  topics:
    - topic: 'k8s_logs'
      when.equals:
        fields.type: k8s_logs
    - topic: 'messages'
      when.equals:
        fields.type: messages
  partition.round_robin:        # round-robin partitioning
    reachable_only: true
  required_acks: 1              # Kafka acknowledgment level
  compression: gzip             # reduce bandwidth usage
  max_message_bytes: 1000000    # cap the size of a single event
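The two when.equals rules mean every event is routed purely by its fields.type value, which the inputs above set to k8s_logs or messages. A minimal shell sketch of the same decision logic (illustrative only; Filebeat evaluates these conditions internally):

```shell
# Mirror of the output.kafka topic-routing rules above
route_topic() {
  case "$1" in
    k8s_logs) echo "k8s_logs" ;;   # events from the container input
    messages) echo "messages" ;;   # events from /var/log/messages
    *)        echo "unmatched" ;;  # no topics rule matches
  esac
}

route_topic k8s_logs   # -> k8s_logs
route_topic messages   # -> messages
```

Since the config defines no standalone default `topic`, keep the fields.type values on the inputs and the topics rules in sync, or events may have nowhere to go.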
3.4 Full Values File
fullnameOverride: "filebeat"
image: "core.jiaxzeng.com/library/filebeat"
hostPathRoot: /var/lib
tolerations:
  - effect: NoSchedule
    operator: Exists
daemonset:
  enabled: true
  resources:
    requests:
      cpu: "100m"
      memory: "100Mi"
    limits:
      cpu: "1000m"
      memory: "200Mi"
  filebeatConfig:
    filebeat.yml: |
      # logging.level: debug
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        fields:
          type: k8s_logs
        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
      - type: log
        enabled: true
        fields:
          type: messages
        paths:
          - /var/log/messages
      output.kafka:
        hosts: ["172.139.20.17:9092", "172.139.20.81:9092", "172.139.20.177:9092"]
        topics:
          - topic: 'k8s_logs'
            when.equals:
              fields.type: k8s_logs
          - topic: 'messages'
            when.equals:
              fields.type: messages
        partition.round_robin:
          reachable_only: true
        required_acks: 1
        compression: gzip
        max_message_bytes: 1000000
4. Deployment
4.1 Run the Install Command
$ helm -n obs-system install filebeat -f etc/kubernetes/addons/filebeat-values.yaml etc/kubernetes/addons/filebeat
Here filebeat-values.yaml contains the full values file from section 3.4, and the last argument is the chart directory untarred in section 2.3.
4.2 Verify DaemonSet Status
$ kubectl -n obs-system get pods -l app=filebeat -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
filebeat-6swzf 1/1 Running 1 (69m ago) 8h 10.244.217.116 k8s-node04 <none> <none>
filebeat-dkp8q 1/1 Running 1 (70m ago) 8h 10.244.135.150 k8s-node03 <none> <none>
filebeat-htpf6 1/1 Running 1 (70m ago) 8h 10.244.122.132 k8s-master02 <none> <none>
filebeat-qm84m 1/1 Running 1 (70m ago) 8h 10.244.85.232 k8s-node01 <none> <none>
filebeat-t7vf4 1/1 Running 1 (70m ago) 8h 10.244.32.132 k8s-master01 <none> <none>
filebeat-vkbbk 1/1 Running 1 (70m ago) 8h 10.244.195.5 k8s-master03 <none> <none>
filebeat-z6k2v 1/1 Running 1 (70m ago) 8h 10.244.58.198 k8s-node02 <none> <none>
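The DaemonSet should place exactly one pod per node (seven here, across three masters and four workers). A quick pipeline can count the Running pods; sketched below against the sample listing, with the real kubectl invocation shown in the comment:

```shell
# Count pods in the Running state from `kubectl get pods` output.
# In practice, pipe kubectl directly:
#   kubectl -n obs-system get pods -l app=filebeat --no-headers \
#     | awk '$3 == "Running" {n++} END {print n}'
running=$(printf '%s\n' \
  "filebeat-6swzf 1/1 Running 1 (69m ago) 8h" \
  "filebeat-dkp8q 1/1 Running 1 (70m ago) 8h" \
  "filebeat-htpf6 1/1 Running 1 (70m ago) 8h" \
  "filebeat-qm84m 1/1 Running 1 (70m ago) 8h" \
  "filebeat-t7vf4 1/1 Running 1 (70m ago) 8h" \
  "filebeat-vkbbk 1/1 Running 1 (70m ago) 8h" \
  "filebeat-z6k2v 1/1 Running 1 (70m ago) 8h" \
  | awk '$3 == "Running" {n++} END {print n}')
echo "$running"   # -> 7
```

If the count is lower than the node count, check node taints against the tolerations set in the values file.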
5. Data Verification
5.1 Check Kafka Topic Offsets
$ ./kafka-get-offsets.sh --bootstrap-server 172.139.20.17:9092 --topic k8s_logs
k8s_logs:0:296025
k8s_logs:1:297971
k8s_logs:2:297818
k8s_logs:3:296924
k8s_logs:4:296992
k8s_logs:5:297129
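kafka-get-offsets.sh prints one topic:partition:end-offset triple per line, so summing the third field gives the total number of records across all partitions (assuming none have yet been deleted by retention). A quick awk sketch over the sample output above:

```shell
# Sum the per-partition end offsets printed by kafka-get-offsets.sh
# (format: topic:partition:offset) to get the total record count.
total=$(printf '%s\n' \
  "k8s_logs:0:296025" \
  "k8s_logs:1:297971" \
  "k8s_logs:2:297818" \
  "k8s_logs:3:296924" \
  "k8s_logs:4:296992" \
  "k8s_logs:5:297129" \
  | awk -F: '{sum += $3} END {print sum}')
echo "$total"   # -> 1782859
```

Rerunning the offsets check after a minute and comparing totals is a simple way to confirm logs are still flowing.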
6. Conclusion
Deploying Filebeat on Kubernetes with Helm is a powerful addition to your log-processing toolkit. Go put it into practice and start shipping logs efficiently!
And don't forget to follow our official account for more deep dives and hands-on guides on container technology and cloud native. Let's ride the waves of the technology ocean together!