Description
Component(s)
exporter/exporterhelper
Is your feature request related to a problem? Please describe.
When troubleshooting exporter performance bottlenecks, there is no way to observe how many export requests are concurrently in-flight per exporter. The existing metrics (otelcol_exporter_sent_*, otelcol_exporter_send_failed_*) are cumulative counters, and otelcol_exporter_queue_size only reflects the queue depth — not how many requests are actively being sent to the downstream backend.
This makes it difficult to determine whether a specific exporter is saturating its concurrent worker pool (e.g., all num_consumers slots are occupied) or if a slow backend is causing requests to pile up at the send layer. While it is technically possible to derive in-flight counts from trace data — by computing the overlap of active exporter/* spans — this requires additional trace storage, querying infrastructure, and non-trivial computation, making it impractical for real-time monitoring and alerting.
Describe the solution you'd like
Add a new metric otelcol_exporter_in_flight_requests as an Int64UpDownCounter (non-monotonic sum) that tracks the number of export requests currently being executed.
- Instrument: `startOp()` increments by +1, `endOp()` decrements by -1 in `obsReportSender`
- Unit: `{request}`
- Attributes: `exporter` (component ID), consistent with existing exporter metrics
- Stability: development
This metric would allow operators to:
- Detect when an exporter is at its concurrency limit (`in_flight_requests == num_consumers`)
- Identify slow backends causing request buildup
- Set alerts on sustained high in-flight counts before queue overflow occurs
- Compare in-flight levels across exporters to pinpoint bottlenecks
The implementation is minimal — it only touches obsReportSender.startOp()/endOp() in exporter/exporterhelper/internal/obs_report_sender.go.
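To make the proposed semantics concrete, here is a minimal, self-contained Go sketch of the +1/-1 bookkeeping that `startOp()`/`endOp()` would perform. It deliberately models the up/down counter with `sync/atomic` from the standard library rather than the actual OpenTelemetry metric API, and the `inFlight` type and its method names are illustrative, not the real `obsReportSender` code:

```go
// Minimal stdlib sketch of the proposed in-flight tracking semantics
// (illustrative only; not the actual obsReportSender or OTel metric API).
// startOp adds +1, endOp adds -1, so the current value at any instant is
// the number of export requests being executed.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// inFlight models otelcol_exporter_in_flight_requests for one exporter.
type inFlight struct{ n atomic.Int64 }

// startOp marks an export request as started (+1).
func (m *inFlight) startOp() { m.n.Add(1) }

// endOp marks the request as finished (-1), on success or failure alike.
func (m *inFlight) endOp() { m.n.Add(-1) }

// current is the value a metric reader would observe.
func (m *inFlight) current() int64 { return m.n.Load() }

func main() {
	var m inFlight
	var wg sync.WaitGroup
	started := make(chan struct{})
	release := make(chan struct{})

	// Simulate 4 concurrent export requests blocked on a slow backend.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			m.startOp()
			started <- struct{}{}
			<-release // the "backend" finally responds
			m.endOp()
		}()
	}
	for i := 0; i < 4; i++ {
		<-started // wait until all 4 requests are in flight
	}
	fmt.Println("in-flight while sending:", m.current()) // 4
	close(release)
	wg.Wait()
	fmt.Println("in-flight after completion:", m.current()) // 0
}
```

In the real implementation this would presumably be an `Int64UpDownCounter` (non-monotonic sum) recorded with the `exporter` attribute, so backends that support deltas can aggregate it correctly across instances.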
Describe alternatives you've considered
No response
Additional context
No response
Tip
React with 👍 to help prioritize this issue. Please use comments to provide useful context, avoiding +1 or me too, to help us triage it. Learn more here.