How to Use Filebeat


Oct 30, 2025 - 12:34

Filebeat is a lightweight, open-source log shipper developed by Elastic as part of the Elastic Stack (formerly known as the ELK Stack). Designed to efficiently collect, forward, and centralize log data from files on your systems, Filebeat plays a critical role in modern observability and monitoring architectures. Whether you're managing a single server or a distributed microservices environment, Filebeat ensures that your logs are reliably delivered to destinations such as Elasticsearch, Logstash, or even Kafka for further processing and analysis.

Unlike heavier log collectors, Filebeat runs as a lightweight agent with minimal resource consumption, making it ideal for deployment across hundreds or thousands of hosts. It reads log files line by line, tracks file positions using a registry file to avoid duplication, and supports multiple output destinations with built-in reliability features like backpressure handling and retry mechanisms.

In today's DevOps and cloud-native environments, centralized logging is not optional; it's essential. Without a robust log collection system, troubleshooting issues becomes a guessing game. Filebeat bridges the gap between your applications and your monitoring platform, transforming raw log files into structured, searchable data. This tutorial will guide you through every step of installing, configuring, and optimizing Filebeat for production use, along with best practices, real-world examples, and essential tools to maximize its potential.

Step-by-Step Guide

Prerequisites

Before installing Filebeat, ensure your system meets the following requirements:

  • A supported operating system: Linux (Ubuntu, CentOS, Debian), macOS, or Windows
  • Root or sudo privileges for installation
  • Access to your desired output destination (Elasticsearch, Logstash, or Kafka)
  • Basic familiarity with command-line interfaces and YAML configuration files

Filebeat is compatible with most modern Linux distributions and Windows Server versions. Ensure your system has a stable internet connection to download the package, and verify that any firewalls or security groups allow outbound connections to your chosen output.

Step 1: Download and Install Filebeat

The installation process varies slightly depending on your operating system. Below are the most common methods:

Linux (Ubuntu/Debian)

First, download the Elastic GPG key to verify package authenticity. Since apt-key is deprecated on recent Debian and Ubuntu releases, store the key in a keyring file:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

Add the Elastic repository:

echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

Update your package list and install Filebeat:

sudo apt-get update && sudo apt-get install filebeat

Linux (CentOS/RHEL)

Import the GPG key:

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a repository file:

sudo tee /etc/yum.repos.d/elastic-8.x.repo <<EOF
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

Install Filebeat:

sudo yum install filebeat

macOS

Using Homebrew:

brew tap elastic/tap

brew install elastic/tap/filebeat

Windows

Download the latest Filebeat Windows ZIP file from the official downloads page. Extract the contents to a directory such as C:\Program Files\Filebeat. Open PowerShell as Administrator and navigate to the extracted folder:

cd 'C:\Program Files\Filebeat'

Install Filebeat as a Windows service:

.\install-service-filebeat.ps1

Step 2: Configure Filebeat

Filebeat's configuration file is located at:

  • Linux/macOS: /etc/filebeat/filebeat.yml
  • Windows: C:\Program Files\Filebeat\filebeat.yml

Before editing, create a backup:

sudo cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak

Open the configuration file in your preferred editor:

sudo nano /etc/filebeat/filebeat.yml

Basic Configuration: Input Section

The filebeat.inputs section defines which files Filebeat should monitor. Here's a minimal example that monitors a single Nginx access log:

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/nginx/access.log

Use filestream (recommended for Filebeat 7.15+) instead of the legacy log type. The filestream input provides better performance and reliability.
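If you are migrating an existing deployment from the legacy log input, recent Filebeat versions (8.7+) let filestream claim the old input's registry state so files are not re-read from the beginning. A sketch, assuming the id and path are yours to fill in:

```yaml
filebeat.inputs:
- type: filestream
  id: migrated-nginx      # every filestream input needs a unique id
  take_over: true         # inherit read offsets from a legacy log input (8.7+)
  paths:
    - /var/log/nginx/access.log
```

Check the migration notes for your exact version before relying on take_over, as its syntax has evolved across releases.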

To monitor multiple log files, use wildcards:

paths:
  - /var/log/nginx/*.log
  - /var/log/apache2/*.log

For recursive directory scanning, use:

paths:
  - /var/log/**/*.log
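Wildcards can also pick up rotated or compressed files you don't want to ship. The filestream scanner can exclude them by pattern; a sketch (the id and exclude pattern are illustrative):

```yaml
filebeat.inputs:
- type: filestream
  id: web-logs
  paths:
    - /var/log/nginx/*.log
  # skip gzip-compressed rotated logs matched by the wildcard
  prospector.scanner.exclude_files: ['\.gz$']
```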

Advanced Input Options

You can enhance input behavior with additional settings:

filebeat.inputs:
- type: filestream
  id: nginx-access          # each filestream input needs a unique id
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx", "web-server"]
  fields:
    environment: production
    service: frontend
  encoding: utf-8
  ignore_older: 24h
  prospector.scanner.check_interval: 10s
  buffer_size: 16384
  message_max_bytes: 10485760

  • tags: Add custom labels to events for easier filtering in Kibana.
  • fields: Add static key-value pairs to all events from this input.
  • ignore_older: Skip files not modified within the specified duration.
  • prospector.scanner.check_interval: How often Filebeat checks for new files (default: 10s).
  • buffer_size: Size of the buffer used to read each file (default: 16KB).
  • message_max_bytes: Maximum bytes read per log message (prevents memory issues with huge lines).

Note that the older scan_frequency, harvester_buffer_size, and max_bytes option names belong to the legacy log input and are rejected by filestream.

Output Configuration

Filebeat can send data directly to Elasticsearch or via Logstash for preprocessing. Below are two common configurations.

Option A: Direct to Elasticsearch

Uncomment and modify the Elasticsearch output section:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "filebeat_internal"
  password: "your_secure_password"
  index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

Ensure Elasticsearch is accessible and the provided credentials have the necessary permissions. The index pattern uses the agent version and date for time-based rotation.
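When you override the default index name as above, Filebeat also requires a matching index template configuration, or it will refuse to start. A minimal companion sketch (the names mirror the index prefix and are illustrative):

```yaml
# Required alongside a custom output.elasticsearch.index setting
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
```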

Option B: Via Logstash

If you're using Logstash for parsing (e.g., grok filters), configure Filebeat to send to Logstash:

output.logstash:
  hosts: ["logstash.example.com:5044"]
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-beats.crt"]

Ensure Logstash is configured to listen on port 5044 with the beats input plugin enabled:

input {
  beats {
    port => 5044
  }
}
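A complete Logstash pipeline would then parse and forward the events. The sketch below is illustrative: the grok pattern assumes Nginx's combined access-log format, and the Elasticsearch host is a placeholder:

```text
input {
  beats {
    port => 5044
  }
}

filter {
  if "nginx" in [tags] {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```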

Step 3: Enable Modules (Optional but Recommended)

Filebeat includes pre-built modules for common services like Nginx, Apache, MySQL, and system logs. These modules simplify configuration by providing predefined log paths, parsers, and Kibana dashboards.

To see available modules:

filebeat modules list

To enable the Nginx module:

sudo filebeat modules enable nginx

This automatically configures the input and creates a module-specific configuration file at /etc/filebeat/modules.d/nginx.yml. You may need to adjust the log paths in that file to match your system:

- module: nginx
  access:
    enabled: true
    var.paths:
      - /var/log/nginx/access.log*
  error:
    enabled: true
    var.paths:
      - /var/log/nginx/error.log*

After enabling modules, load the index templates, ingest pipelines, and sample Kibana dashboards:

sudo filebeat setup

Step 4: Test Configuration

Before starting Filebeat, validate your configuration to avoid startup failures:

filebeat test config

Test connectivity to your output:

filebeat test output

If both tests pass, you're ready to start the service.

Step 5: Start and Enable Filebeat

Linux (systemd)

sudo systemctl start filebeat

sudo systemctl enable filebeat

macOS

brew services start filebeat

Windows

Start-Service filebeat

Set-Service -Name filebeat -StartupType Automatic

Step 6: Verify Data Flow

Once Filebeat is running, verify that logs are being received at your destination.

In Elasticsearch:

curl -X GET "localhost:9200/_cat/indices?v"

Look for indices matching your configured pattern (e.g., filebeat-*).

In Kibana:

Go to Stack Management > Data Views (called Index Patterns in older versions) and create a data view using filebeat-*. Then navigate to Discover to view live log entries.

If you used modules, open the Dashboard section in Kibana to see the pre-built dashboards for Nginx, Apache, etc. that filebeat setup loaded.

Step 7: Monitor Filebeat Health

Filebeat can report on its own health in two ways: by shipping monitoring data to Elasticsearch, and by exposing internal metrics over a local HTTP endpoint. Enable both in the config:

http.enabled: true
http.port: 5066

monitoring.enabled: true
monitoring.elasticsearch:
  hosts: ["http://localhost:9200"]

Restart Filebeat and access metrics at:

http://localhost:5066/debug/vars

Key metrics to monitor:

  • harvester_open_files: Number of files currently being read.
  • registry_entries: Number of tracked files in the registry.
  • output.events.acked: Number of events successfully delivered.
  • output.events.failed: Events that failed to send (indicates output issues).

Best Practices

Use Filestream Over Log Input

Filebeat 7.15+ introduced the filestream input type, which replaces the legacy log input. filestream offers improved performance, better file handling, and more accurate tracking of file changes. Always use filestream for new deployments.

Avoid Monitoring Large or Rapidly Rotating Logs

Filebeat is optimized for structured, text-based logs. Avoid using it to monitor binary files, large databases, or logs that rotate every few seconds. For high-frequency logs, consider using a dedicated log aggregation tool or buffer (e.g., Kafka) between Filebeat and Elasticsearch.

Use Index Lifecycle Management (ILM)

Configure ILM in Elasticsearch to automatically roll over and delete old indices. This prevents unbounded disk usage. Filebeat can auto-configure ILM if you set:

setup.ilm.enabled: true
setup.ilm.rollover_alias: "filebeat"
setup.ilm.pattern: "{now/d}-000001"

Note that in Filebeat 8.x, ILM is configured under setup.ilm, not inside the output section. Run filebeat setup --index-management to create the default ILM policy, index template, and write alias.
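The retention policy itself lives in Elasticsearch. A sketch of a simple hot/delete policy, created from the Kibana Dev Tools console (the rollover thresholds and 90-day retention are illustrative):

```text
PUT _ilm/policy/filebeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
```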

Secure Your Configuration

Never hardcode passwords or API keys in plain text. Use environment variables, the Filebeat keystore (filebeat keystore add ELASTIC_PASSWORD), or an external secret management tool:

output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]
  username: "${ELASTIC_USERNAME}"
  password: "${ELASTIC_PASSWORD}"
  ssl.certificate_authorities: ["/etc/pki/tls/certs/ca.crt"]

Set environment variables before starting Filebeat:

export ELASTIC_USERNAME=filebeat

export ELASTIC_PASSWORD=your_secure_password

Enable Backpressure Handling

Filebeat uses internal queues to buffer events when the output is slow. Configure queue sizes to avoid memory exhaustion:

queue.mem:
  events: 4096
  flush.min_events: 1024
  flush.timeout: 5s

For high-throughput environments, consider the disk queue, which spools events to disk:

queue.disk:
  path: /var/lib/filebeat/queue
  max_size: 10GB

The disk queue survives restarts and reduces the risk of data loss while the output is unavailable.

Tag and Enrich Events

Use tags and fields to add context. This makes filtering and aggregation in Kibana much more efficient:

fields:
  host_group: web-tier
  region: us-east-1
  app_version: 2.1.4

Combine with dynamic fields using processors (see below).

Use Processors for Preprocessing

Filebeat includes powerful processors to modify events before sending. Examples:

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - drop_fields:
      fields: ["agent.ephemeral_id", "input.type"]
  - rename:
      fields:
        - from: "message"
          to: "log.message"

Processors can:

  • Enrich logs with host or cloud metadata
  • Remove sensitive fields
  • Rename fields for consistency
  • Conditionally modify events based on content

Log Rotation Compatibility

Filebeat handles log rotation automatically. The registry tracks files by inode and device, so rename-based rotation (logrotate's default create mode) works transparently: Filebeat finishes reading the renamed file before moving on to the new one. Avoid logrotate's copytruncate option where possible, because lines written between the copy and the truncate can be lost, and truncation can confuse offset tracking.
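A rename-based logrotate stanza that plays well with Filebeat might look like this (the path, ownership, and retention are illustrative):

```text
/var/log/myapp/*.log {
    daily
    rotate 14
    missingok
    compress
    delaycompress
    create 0640 myapp adm
}
```

The delaycompress directive matters here: it keeps the most recently rotated file uncompressed for one cycle, giving Filebeat time to finish reading it.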

Monitor Filebeat Itself

Set up alerts for Filebeat health. Monitor:

  • Service status (is Filebeat running?)
  • Output failures (high output.events.failed)
  • Registry growth (indicates file tracking issues)
  • Memory and CPU usage (should remain low)

Use Elasticsearch's built-in Stack Monitoring, or scrape the HTTP metrics endpoint with your own tooling (e.g., feeding Prometheus and Grafana), to visualize Filebeat health.

Tools and Resources

Official Documentation

Always refer to the official Filebeat documentation for the latest features and compatibility information.

Community Modules and Templates

GitHub hosts numerous community-contributed Filebeat configurations, modules, and dashboard templates.

Containerized Deployments

For Docker and Kubernetes environments, use the official Elastic Filebeat Docker image:

docker pull docker.elastic.co/beats/filebeat:8.12.0

Example Kubernetes manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.12.0
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Validation and Debugging Tools

  • filebeat test config: validates the configuration file
  • filebeat test output: checks connectivity to the destination
  • filebeat -e -d "*": runs Filebeat in the foreground with debug logging
  • tail -f /var/log/filebeat/filebeat* (or journalctl -u filebeat -f on systemd hosts): monitor Filebeat's own logs (Linux)
  • Kibana Dev Tools: query Elasticsearch to verify data ingestion

Performance Tuning Tools

  • htop or top: monitor Filebeat memory and CPU usage
  • iotop: check disk I/O caused by log reading
  • netstat -tuln | grep 5044: verify the Logstash port is listening
  • curl -X GET "localhost:5066/debug/vars": access internal metrics

Real Examples

Example 1: Centralized Nginx Logging Across 50 Servers

Scenario: You manage 50 web servers running Nginx. You want to collect access and error logs into Elasticsearch and visualize them in Kibana.

Configuration:

filebeat.inputs:
- type: filestream
  id: nginx
  enabled: true
  paths:
    - /var/log/nginx/access.log*
    - /var/log/nginx/error.log*
  tags: ["nginx", "web-server"]
  fields:
    environment: production
    role: web
  encoding: utf-8
  ignore_older: 24h

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - rename:
      fields:
        - from: "message"
          to: "log.message"

output.elasticsearch:
  hosts: ["https://elasticsearch-prod.example.com:9200"]
  username: "filebeat_writer"
  password: "${ELASTIC_PASSWORD}"
  ssl.certificate_authorities: ["/etc/pki/tls/certs/ca.crt"]

setup.ilm.enabled: true
setup.ilm.rollover_alias: "filebeat-web"
setup.ilm.pattern: "{now/d}-000001"

Deployment:

  • Deploy this config via Ansible or Puppet across all 50 servers
  • Use a load-balanced Elasticsearch cluster for scalability
  • Enable ILM to auto-delete logs older than 90 days
  • Create a Kibana dashboard showing top 10 IP addresses, HTTP status codes, and response times

Example 2: Kubernetes Pod Logs with Filebeat DaemonSet

Scenario: You run a Kubernetes cluster and want to collect logs from all pods without modifying applications.

Configuration:

filebeat.inputs:
- type: container
  enabled: true
  paths:
    - /var/log/containers/*.log

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

This configuration uses Filebeat's container input to detect and parse container runtime logs. The add_kubernetes_metadata processor enriches each event with the pod name, namespace, labels, and annotations.
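The ${NODE_NAME} variable has to be injected into the Filebeat container. In a Filebeat DaemonSet spec, this is typically done with the Kubernetes downward API; the snippet below belongs under the filebeat container definition:

```yaml
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
```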

Result: In Kibana, you can filter logs by namespace, pod, or container name, making debugging microservices far more efficient.

Example 3: Filtering Sensitive Data Before Transmission

Scenario: Your application logs contain PII (personally identifiable information) like email addresses. You must remove them before sending logs to Elasticsearch for compliance.

Configuration:

processors:
  - drop_fields:
      fields: ["user.email", "user.phone"]
      ignore_missing: true
  - replace:
      fields:
        - field: "message"
          pattern: '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}'
          replacement: "[REDACTED_EMAIL]"

This ensures compliance with GDPR or HIPAA without requiring application-level changes.
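It is worth sanity-checking a redaction regex offline before shipping it. The snippet below runs an equivalent pattern through sed against a made-up log line (the line and field values are hypothetical):

```shell
# Hypothetical log line; the sed pattern mirrors the redaction regex
line='user=alice@example.com action=login'
redacted=$(printf '%s\n' "$line" \
  | sed -E 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/[REDACTED_EMAIL]/g')
echo "$redacted"
```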

Example 4: Multi-Tier Log Routing

Scenario: You have three environments: development, staging, and production. You want to route logs to separate Elasticsearch indices.

Configuration:

fields:
  environment: production

processors:
  - add_fields:
      target: 'fields'
      fields:
        log_type: "application"
      when:
        equals:
          fields.environment: "production"

output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]
  index: "filebeat-%{[fields.environment]}-%{[fields.log_type]}-%{[agent.version]}-%{+yyyy.MM.dd}"

Each environment uses a different config file with a different fields.environment value. This results in indices like filebeat-production-application-8.12.0-2024.06.15, enabling clean separation and access control.
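The interpolation is easy to reason about by expanding it by hand. The shell snippet below simulates how the index name resolves for one environment (the variable values are examples, not Filebeat behavior):

```shell
# Simulate the index-name expansion for the production environment
environment=production
log_type=application
agent_version=8.12.0
index="filebeat-${environment}-${log_type}-${agent_version}-$(date +%Y.%m.%d)"
echo "$index"
```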

FAQs

Is Filebeat better than Logstash for log collection?

Filebeat is optimized for lightweight, reliable log shipping, while Logstash is designed for heavy data processing (filtering, parsing, enrichment). Use Filebeat to collect logs and send them to Logstash if you need complex transformations. For simple forwarding, Filebeat alone is sufficient and more efficient.

Can Filebeat send logs to cloud services like AWS CloudWatch?

Filebeat does not natively support AWS CloudWatch Logs. Use Filebeat to send logs to Logstash or Kafka, then use a custom script or AWS Lambda to forward them to CloudWatch. Alternatively, use the AWS CloudWatch Agent directly on your hosts.

What happens if Filebeat crashes or restarts?

Filebeat uses a registry (stored under /var/lib/filebeat/registry on Linux) to track the last read position of each file. Upon restart, it resumes from where it left off, avoiding duplication or data loss, provided the log files haven't been rotated away or deleted in the meantime.

How much memory does Filebeat use?

Filebeat typically uses less than 100 MB of RAM per instance, even when monitoring dozens of log files. Memory usage scales with the number of open files and queue size. Use persistent queues and limit the number of concurrent harvesters if memory is constrained.

Can I use Filebeat to monitor Windows Event Logs?

Filebeat offers a winlog input for Windows event logs, although Winlogbeat is Elastic's dedicated (and more mature) shipper for this job. Each winlog input reads one event log:

filebeat.inputs:
- type: winlog
  name: Application
  tags: ["windows", "application"]
- type: winlog
  name: System
  tags: ["windows", "system"]

How do I update Filebeat without losing configuration?

Always back up your filebeat.yml before updating. New versions may introduce breaking changes in configuration syntax. Check the compatibility guide before upgrading. Use package managers (apt, yum) to handle updates cleanly.

Why are my logs appearing in Kibana with timestamps in the future?

This usually occurs when the system clock on the Filebeat host is out of sync. Use NTP (Network Time Protocol) to synchronize time across all hosts. Misaligned timestamps make log analysis unreliable and can break alerting rules.

Does Filebeat support compression?

Yes, Filebeat compresses events in transit to both Logstash and Elasticsearch. Set the compression level in the output config:

output.logstash:
  hosts: ["logstash.example.com:5044"]
  compression_level: 6

Conclusion

Filebeat is an indispensable tool for modern log management. Its lightweight design, reliability, and deep integration with the Elastic Stack make it the go-to solution for collecting and forwarding log data across heterogeneous environments, from single servers to sprawling Kubernetes clusters. By following the steps outlined in this guide, you've learned how to install, configure, and optimize Filebeat for production use, while adhering to industry best practices for security, performance, and scalability.

Remember: effective logging is not just about collecting data; it's about making that data actionable. Filebeat ensures your logs are delivered accurately and efficiently, enabling faster troubleshooting, better observability, and a stronger security posture. Whether you're monitoring web servers, databases, or microservices, Filebeat provides the foundation for a robust, centralized logging pipeline.

As your infrastructure evolves, continue to refine your Filebeat configurations. Leverage processors for enrichment, use persistent queues for resilience, and monitor Filebeat's health proactively. With the right setup, Filebeat becomes more than a log shipper; it becomes a critical component of your operational excellence strategy.