
Control Fluent Bit Backpressure With Prometheus

Learn Fluent Bit’s backpressure control mechanisms, the key metrics to monitor and how to configure alerts to maintain a healthy, high-performance observability pipeline.

Fluent Bit is one of the most widely used open source data collection agents for logs, metrics and traces. It’s lightweight, high-performance and easily extensible, making it ideal for modern observability pipelines.

At its core, Fluent Bit is a simple data pipeline comprising several stages: input, parser, filter, buffer, router and output.

However, even the most efficient pipeline can hit a bottleneck known as backpressure, which occurs when data is ingested at a rate that exceeds the system’s ability to process and flush it. Backpressure causes problems such as high memory usage, service downtime and data loss.

Let’s explore how to monitor and alert on backpressure in Fluent Bit, enabling you to maintain a healthy logging pipeline.

Prerequisites

  • Docker: Installed on your system.
  • Elasticsearch: We will send logs to Elasticsearch. To follow along, refer to this guide.
  • Familiarity with Fluent Bit concepts: Such as inputs, outputs, parsers and filters. If you’re unfamiliar with these concepts, please refer to the official documentation.

Understanding Backpressure in Fluent Bit

In high-throughput logging pipelines, Fluent Bit can ingest data faster than downstream outputs (HTTP endpoints, databases, storage backends) can accept it. This mismatch between the input and output rates gives rise to backpressure, a condition in which buffers grow, memory consumption increases and ingestion must be throttled or paused.

Backpressure Example

A classic example is reading from large log files (or having big backlogs) and trying to dispatch events to a backend over the network. If the backend is slow or unavailable, buffered data accumulates in Fluent Bit.

If unchecked, backpressure can lead to excessive memory usage, performance degradation or even data loss.

Mechanisms To Control Backpressure

Fluent Bit implements several controls to limit how much data an input plugin can feed into the pipeline under strain:

  • Mem_Buf_Limit: Applies only when storage.type is memory (the default). Sets an upper bound on how much in-memory data can be queued. When memory usage exceeds this limit, Fluent Bit triggers a pause callback on the input, preventing new data from being ingested until the buffer is drained.
  • storage.max_chunks_up: Applies when using storage.type filesystem or hybrid (memory + filesystem) mode. Controls how many memory “chunks” can be held before limits are enforced. Once the limit is reached, Fluent Bit may stop buffering new data in memory and switch to filesystem-only buffering (if enabled).

For more information about backpressure in Fluent Bit, refer to the official documentation.
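
To make this concrete, here is a minimal configuration sketch that combines both controls. The paths, tags and limits are illustrative assumptions, not recommendations; tune them for your own workload:

service:
  storage.path: /var/log/flb-storage/    # required for filesystem buffering (assumed path)
  storage.max_chunks_up: 128             # max chunks kept in memory when filesystem buffering is enabled

pipeline:
  inputs:
    # In-memory buffering: ingestion pauses once the buffer exceeds mem_buf_limit.
    - name: tail
      tag: app.memory                    # assumed tag
      path: /var/log/app/*.log           # assumed path
      mem_buf_limit: 10M

    # Filesystem buffering: chunks beyond storage.max_chunks_up are kept on disk only.
    - name: tail
      tag: app.fs                        # assumed tag
      path: /var/log/batch/*.log         # assumed path
      storage.type: filesystem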

Key Metrics To Monitor for Fluent Bit Backpressure

Fluent Bit exposes its internal state as Prometheus metrics, which are essential for monitoring and alerting on backpressure. The following overview covers the essential metrics for detecting and troubleshooting backpressure in Fluent Bit:

Input metrics:

  • fluentbit_input_ingestion_paused: Set to 1 when an input is paused. Use it to detect when inputs are paused due to backpressure.
  • fluentbit_input_storage_overlimit: Set to 1 when an input exceeds its storage limits. Use it to detect when inputs are over their storage limits.

Chunk metrics:

  • fluentbit_storage_fs_chunks_up: Number of chunks held in memory. High values indicate memory pressure.
  • fluentbit_storage_fs_chunks_down: Number of chunks held on disk only. Shows filesystem buffering activity.

Throughput metrics:

  • fluentbit_input_records_total: Total records processed by each input. Serves as the baseline for input throughput.
  • fluentbit_output_proc_records_total: Total records processed by each output. Serves as the baseline for output throughput.

Error metrics:

  • fluentbit_output_errors_total: Total delivery errors per output. Indicates destination issues.
  • fluentbit_output_retries_total: Total retry attempts per output. Shows temporary delivery issues.
  • fluentbit_output_retries_failed_total: Total failed retry attempts. Indicates permanent delivery failures.

For more information on these metrics, refer to the official documentation.
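
To collect these metrics with Prometheus, Fluent Bit’s built-in HTTP server must be enabled. The sketch below assumes Fluent Bit is reachable by Prometheus at the hostname fluent-bit on port 2020; both names are assumptions for this setup:

# fluent-bit.yaml (service section)
service:
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020

# prometheus.yml (scrape configuration)
scrape_configs:
  - job_name: fluent-bit
    metrics_path: /api/v1/metrics/prometheus
    static_configs:
      - targets: ["fluent-bit:2020"]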

Setting Up a Backpressure Scenario

To understand how backpressure works in practice, let’s set up a scenario that allows us to observe it in action.

The scenario consists of the following components:

1. High-Volume Input

This represents a source generating logs at a high rate. For testing purposes, you could use the tail input plugin reading from a file that’s being rapidly written to, or the dummy input plugin configured to generate messages at a high rate.

2. Fluent Bit Processing

Fluent Bit receives these logs and processes them according to its configuration:

  • Input plugins collect the logs and create chunks.
  • The internal buffer holds chunks until they can be delivered.
  • Output plugins attempt to send the data to destinations.

3. Elasticsearch

This represents a destination that can’t keep up with the input rate. To control the number of requests processed by Elasticsearch, we will add a proxy server between Fluent Bit and Elasticsearch and configure the proxy with rate limiting.

4. Metrics Collection and Visualization

As backpressure develops, Fluent Bit’s internal Prometheus metrics reflect it: Prometheus scrapes these metrics and Grafana visualizes them in a dashboard.

This monitoring setup enables us to observe backpressure as it occurs and understand its causes and effects.

Let’s see the above setup in action.

Instructions

1. Clone the Repository

Start by cloning the repository that contains the necessary configuration files.
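
For example (the placeholders below stand in for the repository referenced by this article; substitute its actual URL and directory name):

git clone <repository-url>
cd <repository-directory>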

2. Start Elasticsearch (Optional)

We will run Elasticsearch in a Docker container. If you already have Elasticsearch running, you can skip this step.


It will take a couple of minutes to set up Elasticsearch and Kibana. The default username and password are elastic and rslglTS4.

3. Modify Configuration Files

  • In the nginx/nginx.conf file, replace http://<your-ip-addr>:9200 with your Elasticsearch host and port. Note: If you are running Elasticsearch locally in a Docker container as described above, use the public IPv4 address assigned to your machine instead of localhost.
  • In the fluent-bit/config/fluent-bit.yaml output section, replace <your-username> and <your-password> with your Elasticsearch authentication credentials.

4. Fluent Bit Configuration

You can find the Fluent Bit configuration here.

Inputs

The dummy input plugin is used to generate artificial log events at a high rate:

  • rate: 350: Produces 350 log records per second. This ensures the pipeline is stressed enough to trigger backpressure.
  • samples: -1: Runs indefinitely, so the log generation doesn’t stop.
  • mem_buf_limit: 2M: Sets a very small memory buffer limit of just 2MB. Since each log event is enriched later with additional fields, this buffer fills quickly, which helps simulate a backpressure scenario.
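
Putting those settings together, the input section looks roughly like the sketch below. Only the rate, samples and mem_buf_limit values come from the description above; the tag and dummy payload are assumptions:

pipeline:
  inputs:
    - name: dummy
      tag: demo.logs                          # assumed tag
      dummy: '{"message": "demo log line"}'   # assumed payload
      rate: 350                               # 350 records per second
      samples: -1                             # run indefinitely
      mem_buf_limit: 2M                       # small buffer so backpressure appears quickly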

Filters

The Lua filter simulates processing overhead and inflates each record:

  • A small loop (for i=1,1000) introduces CPU work, mimicking real-world processing delays.
  • New fields are added (hostname, environment), and a data field containing 1KB of repeated characters is injected. This increases the payload size for each log.

Together, the extra CPU load and message size increase the stress on the pipeline. The filter ensures that Fluent Bit consumes resources while handling logs, not just passing them through.
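
A Lua script implementing that behavior might look like the sketch below. The function name, hostname and environment values are assumptions; the loop bound and the 1KB data field follow the description above:

-- filter.lua (sketch)
function append_fields(tag, timestamp, record)
    -- Simulate processing overhead with a small CPU-bound loop.
    local x = 0
    for i = 1, 1000 do
        x = x + i
    end

    -- Enrich and inflate the record.
    record["hostname"] = "demo-host"          -- assumed value
    record["environment"] = "load-test"       -- assumed value
    record["data"] = string.rep("x", 1024)    -- 1KB of repeated characters

    -- Return code 1: record modified, use the returned timestamp and record.
    return 1, timestamp, record
end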

Outputs

The Elasticsearch output plugin sends logs to an external system; however, in this demo, it’s intentionally routed through an NGINX proxy instead of being sent directly to Elasticsearch.

  • host: nginx-proxy / port: 9000: The NGINX proxy is configured with rate-limiting rules. This acts as a bottleneck, slowing down Fluent Bit’s ability to offload logs.
  • The result is that Fluent Bit starts buffering records, eventually hitting memory limits and demonstrating backpressure in action.

In a real-world environment, backpressure may occur when Elasticsearch (or another storage system) slows down due to a heavy indexing load. Here, the NGINX proxy is used to mimic that controlled slowdown.
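
The proxy’s rate limiting can be as simple as an NGINX limit_req rule. The sketch below is an assumption about how such a bottleneck might be configured; the zone size, request rate, burst value and upstream address are placeholders to adjust:

# nginx.conf (sketch)
http {
    limit_req_zone $binary_remote_addr zone=es_limit:10m rate=5r/s;

    server {
        listen 9000;

        location / {
            limit_req zone=es_limit burst=10;
            proxy_pass http://<your-ip-addr>:9200;   # your Elasticsearch endpoint
        }
    }
}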

5. Start the Services

Start all services using the command below. Note: Ensure Elasticsearch is up and running.

docker-compose up -d

Wait for a few moments to allow all services to initialize correctly.

6. Access Grafana

Open your web browser and navigate to http://localhost:3000. Log in with the default credentials (admin/admin) and skip the new password generation step when prompted.

7. Import the Dashboard

In Grafana, import the provided dashboard JSON file to visualize Fluent Bit metrics.

  • Go to the Dashboards section from the left sidebar.
  • Under the New dropdown, select Import.
  • Upload the dashboard.json file from the repository.

8. Monitor Metrics

After importing the dashboard, wait for a couple of minutes for the data to populate.

Backpressure propagates from the output to the input. You will start seeing an increase in the number of retried and dropped records from the output plugin (Elasticsearch).

You will also see a widening gap between the number of input records processed (yellow) and the number of output records delivered (green).

In the next stage, as backpressure builds, the input plugin’s memory buffer (mem_buf_limit) fills up, causing Fluent Bit to pause ingesting new records (fluentbit_input_ingestion_paused is set to 1). A paused input is a clear indication of backpressure.
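
If you want to query the same signals directly in Prometheus rather than through the dashboard, expressions along these lines (the five-minute windows are illustrative) surface the same story:

# Is any input currently paused?
max(fluentbit_input_ingestion_paused) by (name)

# Input vs. output throughput (records per second)
sum(rate(fluentbit_input_records_total[5m])) by (name)
sum(rate(fluentbit_output_proc_records_total[5m])) by (name)

# Output retries and failed retries
sum(rate(fluentbit_output_retries_total[5m])) by (name)
sum(rate(fluentbit_output_retries_failed_total[5m])) by (name)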

9. Clean Up

The command below will stop all services.
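
Assuming the same Docker Compose file used to start the stack, that is:

docker-compose down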


If you have started local Elasticsearch, run this command.
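
Assuming Elasticsearch and Kibana were also started with Docker Compose (per the guide linked in the prerequisites), run the same command from that setup’s directory; adapt it if you started Elasticsearch another way:

docker-compose down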

Setting Up Fluent Bit Alerts on Backpressure

Configuring Fluent Bit alerts ensures you’re notified the moment backpressure begins to affect your data pipeline:

1. Input Paused Alert

  • Condition: fluentbit_input_ingestion_paused > 0
  • Evaluation: Every 1m for 2m
  • Notification message: “Input {{$labels.name}} is paused due to backpressure”

2. Output Errors Alert

  • Condition: rate(fluentbit_output_errors_total[5m]) > 1
  • Evaluation: Every 1m for 5m
  • Notification message: “Output {{$labels.name}} experiencing errors at rate {{$value}}/s”
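
If you prefer to manage these as Prometheus alerting rules rather than Grafana-managed alerts, the two conditions above translate roughly into the rule file below; the group name and severity label are assumptions:

groups:
  - name: fluent-bit-backpressure
    rules:
      - alert: FluentBitInputPaused
        expr: fluentbit_input_ingestion_paused > 0
        for: 2m
        labels:
          severity: warning                  # assumed label
        annotations:
          summary: "Input {{ $labels.name }} is paused due to backpressure"

      - alert: FluentBitOutputErrors
        expr: rate(fluentbit_output_errors_total[5m]) > 1
        for: 5m
        labels:
          severity: warning                  # assumed label
        annotations:
          summary: "Output {{ $labels.name }} experiencing errors at rate {{ $value }}/s"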

Conclusion

Monitoring and alerting on backpressure in Fluent Bit is essential for maintaining a healthy logging pipeline. By understanding the backpressure mechanisms, configuring appropriate limits and setting up monitoring and alerts, you can ensure that your Fluent Bit deployment handles high volumes of data efficiently and reliably.

The key takeaways are:

  1. Configure appropriate memory and storage limits for your input plugins.
  2. Monitor the key metrics related to backpressure.
  3. Set up alerts to be notified when backpressure occurs.
  4. Use the visualizations to understand the behavior of your Fluent Bit deployment.

By following these guidelines, you can effectively manage backpressure in Fluent Bit and ensure that your logging infrastructure remains robust and reliable.
