
Sending OpenTelemetry traces to LaunchDarkly

Read time: 6 minutes
Last edited: May 02, 2024
This feature is for Early Access Program customers only

Sending OpenTelemetry traces to LaunchDarkly is only available to members of LaunchDarkly's Early Access Program (EAP). If you want access to this feature, join the EAP.

Enabling OpenTelemetry traces in your server-side SDKs is available to all customers. To learn more, read OpenTelemetry.

Overview

This topic explains how to send OpenTelemetry traces to LaunchDarkly. LaunchDarkly converts this data into events that LaunchDarkly metrics track over time. Experimentation and guarded releases use these metrics to measure flag performance.

This topic covers:

  • how to send trace data to LaunchDarkly
  • how to configure your collector to receive, process, and export telemetry data
  • how to create metrics from OpenTelemetry trace data

About OpenTelemetry

OpenTelemetry (OTel) is an open source observability framework and toolkit designed to create and manage telemetry data such as traces, metrics, and logs. You can use LaunchDarkly's OpenTelemetry protocol (OTLP) endpoint to send OpenTelemetry traces to LaunchDarkly.

LaunchDarkly converts the information in these traces into events for use with Experimentation and guarded releases. Typically, the information you collect in traces relates to API performance and errors.

Because OpenTelemetry is vendor- and tool-agnostic, you can reuse the instrumentation that may already exist in your code to create LaunchDarkly metrics without changing application code.

LaunchDarkly supports receiving trace data over gRPC, HTTP/protobuf, and HTTP/JSON.

Send trace data to LaunchDarkly

To send OpenTelemetry trace data to LaunchDarkly, we recommend using the OpenTelemetry Collector. The OpenTelemetry Collector is a vendor-agnostic proxy that can receive, process, and export telemetry data. You can configure an OpenTelemetry pipeline with processors that filter out data that is not relevant to LaunchDarkly, reducing data transport costs, and with an exporter that sends data to LaunchDarkly alongside any other OTLP backends you already use.
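As a point of reference, the following is a minimal sketch of the application side of this setup in Python. It assumes the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-grpc packages, a hypothetical service name of example-service, and a Collector listening on the default OTLP/gRPC port at localhost:4317; adjust these details for your own deployment.

# Minimal sketch: export application spans to a locally running Collector.
# The Collector address (localhost:4317, the default OTLP/gRPC port) is an
# assumption; point it at wherever your Collector is deployed.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example-service")
with tracer.start_as_current_span("GET /api/v2/flags"):
    pass  # your request handling code runs inside the span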

LaunchDarkly processes two types of data from your OpenTelemetry traces:

  • HTTP span attributes, including latency, 5xx occurrences, and other errors, from spans that contain a feature flag span event or that overlap with a span that does
  • exception span events that occur after a feature flag span event

A feature flag span event is defined as any span event that contains a feature_flag.context.key attribute. LaunchDarkly ignores traces that do not include span events with this attribute.

To ensure your spans have compatible feature flag events, configure your SDK to use the OpenTelemetry tracing hook. This hook automatically attaches feature flag event data to your OTel traces for you.

Try it in your SDK: OpenTelemetry
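The hook adds the span event for you, but for illustration, here is a minimal Python sketch of the shape of data LaunchDarkly looks for. The feature_flag event name and the feature_flag.context.key attribute come from this topic; the feature_flag.key attribute and all values are hypothetical stand-ins rather than the hook's exact output.

# Illustrative only: the SDK's OpenTelemetry tracing hook adds an event like
# this automatically whenever a flag is evaluated inside an active span.
from opentelemetry import trace

tracer = trace.get_tracer("example-service")

with tracer.start_as_current_span("GET /api/v2/flags") as span:
    # A "feature_flag" span event with a feature_flag.context.key attribute is
    # what marks this trace as relevant to LaunchDarkly. The flag key and
    # context key below are hypothetical examples.
    span.add_event(
        "feature_flag",
        attributes={
            "feature_flag.key": "example-flag-key",         # assumption: semantic-convention attribute
            "feature_flag.context.key": "user-key-123abc",  # required by LaunchDarkly per this topic
        },
    )

    # An exception recorded after the flag event becomes an "exception" span
    # event, which LaunchDarkly can turn into error-related events.
    try:
        raise RuntimeError("example failure")
    except RuntimeError as err:
        span.record_exception(err)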

Collector configuration

Collector pipelines consist of three components: receivers, processors, and exporters. Because the exporter sends data to LaunchDarkly, it requires a LaunchDarkly access token. To learn how to create an access token, read API access tokens.

To configure a pipeline to send trace data to LaunchDarkly, modify the Collector's config file, otel-collector-config.yaml, as follows:

# The receivers specify how the Collector receives data.
# In this example, it receives data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.
receivers:
  otlp:
    protocols:
      grpc:
      http:

# The exporters specify how the Collector sends data.
# In this example, it sends data to LaunchDarkly using HTTP.
exporters:
  otlphttp/launchdarkly:
    traces_endpoint: https://events.launchdarkly.com/otlp/traces
    headers:
      Authorization: ${env:LD_ACCESS_TOKEN}
      X-LaunchDarkly-Project: example-project-key
      X-LaunchDarkly-Environment: example-environment-key

# The processors specify how the Collector processes the trace data.
# In this example, it groups spans that belong to the same trace,
# and only passes along span events related to feature flags or exceptions.
processors:
  # The groupbytrace processor groups all spans that belong to the same trace together.
  # This is required to ensure LaunchDarkly receives a complete trace in a single request.
  groupbytrace:
    wait_duration: 10s
  # Remove all span events that are not "feature_flag" or "exception"
  filter/launchdarkly-spanevents:
    error_mode: ignore
    traces:
      spanevent:
        - 'not (name == "feature_flag" or name == "exception")'
  # Remove all spans that do not have an HTTP route or any span events remaining
  # after the previous filter has been applied
  filter/launchdarkly-spans:
    error_mode: ignore
    traces:
      span:
        - 'not (attributes["http.route"] != nil or Len(events) > 0)'
  batch:

extensions:
  health_check:

service:
  pipelines:
    # Add a new pipeline to send data to LaunchDarkly
    traces/ld:
      receivers: [otlp]
      processors:
        [
          filter/launchdarkly-spanevents,
          filter/launchdarkly-spans,
          groupbytrace,
          batch,
        ]
      exporters: [otlphttp/launchdarkly]
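Once the Collector is running with this configuration, you can confirm it is up before pointing applications at it. The sketch below assumes the health_check extension is enabled under service.extensions and is listening on its default port, 13133; both are assumptions about your deployment.

# Probe the Collector's health_check extension (assumed default: port 13133).
import urllib.request

with urllib.request.urlopen("http://localhost:13133/") as response:
    # A 200 response indicates the Collector is up and ready to receive traces.
    print("Collector health status:", response.status)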

Create LaunchDarkly metrics from OpenTelemetry trace data

When LaunchDarkly receives OpenTelemetry trace data, it processes and converts this data into events that LaunchDarkly metrics track over time. Experimentation and guarded releases use these metrics to measure flag performance.

There are two types of events that LaunchDarkly creates from OpenTelemetry traces: route-specific events and global events. Route-specific events are useful when you are experimenting with a change that is known to impact a small subset of your server's HTTP routes. Global events are useful when you believe your change may impact all routes, or when you are not sure of the impact of your change.

Use the following reference to create LaunchDarkly metrics from the events that LaunchDarkly produces from your OpenTelemetry trace data. For each event type, it lists the LaunchDarkly metric properties to use and the event name template, with examples. To learn more about LaunchDarkly metrics, read Metrics.

Event type: per-route HTTP request latency
  • Event kind: custom numeric
  • Unit of measure: ms
  • Success criteria: lower
Event name template: http.latency;method={http.request.method},route={http.route}
Examples:
http.latency;method=GET,route=/api/v2/flags
http.latency;method=PATCH,route=/api/v2/flags/{id}

Event type: global HTTP request latency
  • Event kind: custom numeric
  • Unit of measure: ms
  • Success criteria: lower
Event name template: otel.http.latency
Example:
otel.http.latency

Event type: per-route HTTP request errors
  • Event kind: custom conversion
  • Success criteria: lower
Event name template: http.errors;method={http.request.method},route={http.route}
Examples:
http.errors;method=GET,route=/api/v2/flags
http.errors;method=PATCH,route=/api/v2/flags/{id}

Event type: global exceptions
  • Event kind: custom conversion
  • Success criteria: lower
Event name template: otel.exception
Example:
otel.exception

Event type: per-route HTTP 5xxs
  • Event kind: custom conversion
  • Success criteria: lower
Event name template: http.5XX;method={http.request.method},route={http.route}
Examples:
http.5XX;method=GET,route=/api/v2/flags
http.5XX;method=PATCH,route=/api/v2/flags/{id}

LaunchDarkly supports both the 1.20.0 and 1.23.0 versions of the OpenTelemetry semantic conventions, so if you are using 1.20.0, you can use http.method and http.status_code instead of http.request.method and http.response.status_code. To learn more, read OpenTelemetry's HTTP semantic convention migration guide.

When you create a per-route metric, you must provide the http.request.method and http.route for each metric in the string templates specified above. The method and route must match the corresponding HTTP semantic convention attributes in the spans you send to LaunchDarkly.
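For illustration, the following Python sketch sets these attributes on a server span by hand, using the 1.23.0 attribute names and a route from the examples above. In practice, most HTTP instrumentation libraries set them for you; the span name and values here are hypothetical.

# Illustrative only: per-route events require http.request.method and
# http.route (semantic conventions 1.23.0) on the span. Most HTTP
# instrumentation libraries set these attributes automatically.
from opentelemetry import trace

tracer = trace.get_tracer("example-service")

with tracer.start_as_current_span("PATCH /api/v2/flags/{id}") as span:
    span.set_attribute("http.request.method", "PATCH")
    span.set_attribute("http.route", "/api/v2/flags/{id}")
    span.set_attribute("http.response.status_code", 200)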