Read time: 8 minutes
Last edited: Feb 21, 2024
Release guardian is only available to members of LaunchDarkly's Early Access Program (EAP). If you want access to this feature, join the EAP wait list.
This topic explains how to observe metrics on flag changes and configure LaunchDarkly to take action on the results. You can configure LaunchDarkly to notify you or automatically roll back your flag changes when it detects metric regressions on a release.
Metrics measure end-user and system behaviors affected by flag variations. You can use metrics to track a variety of system health indicators and end-user behaviors, from engineering metrics like errors and latencies, to product metrics like clicks and conversions.
You can connect metrics to LaunchDarkly using a LaunchDarkly SDK, the metric import API, the Segment integration, or the Sentry integration. To learn more, read Metrics.
When you toggle on a flag or otherwise begin serving the "true" variation, monitoring metrics lets you know whether the variation change is having any negative impact on your app or audience. This negative effect is called a "regression." You can configure LaunchDarkly to either notify you of a regression, or notify you and automatically roll back the flag change.
To monitor metrics on a flag, your flag must meet the following prerequisites:
- The flag must be a boolean flag
- The flag must have at least one rule serving "true"
You cannot monitor metrics on string, number, or JSON flags, and you cannot monitor metrics for rules that serve "false" or that serve a percentage rollout.
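As a rough illustration, the prerequisites above amount to a check like the following. This is a hypothetical sketch: `can_monitor` and the flag/rule shapes are invented for illustration and are not LaunchDarkly's actual API or data model.

```python
# Hypothetical sketch of the monitoring prerequisites described above.
# The flag kind and rule shapes are invented for illustration; they are
# not LaunchDarkly's actual API or data model.

def can_monitor(flag_kind: str, rules: list) -> bool:
    """A flag qualifies only if it is a boolean flag and at least one
    rule serves the "true" variation without a percentage rollout."""
    if flag_kind != "boolean":
        return False  # string, number, and JSON flags are not supported
    return any(
        rule.get("serves") == "true" and not rule.get("rollout")
        for rule in rules
    )
```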
Before you can add metrics to a flag, you must create the metrics you want to monitor. To learn how, read Creating metrics.
You must also decide on a randomization unit. A randomization unit is the context kind that LaunchDarkly uses to assign traffic to each of a flag's variations. You can add only metrics that use the randomization unit you select. To learn more, read Randomization units.
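Conceptually, a randomization unit works like deterministic bucketing: the same context key always lands in the same bucket, so each context consistently sees the same variation. The sketch below is a simplified illustration only and does not reproduce LaunchDarkly's actual bucketing algorithm:

```python
import hashlib

def bucket(context_key: str, flag_key: str = "my-flag") -> float:
    """Deterministically map a context key to a value in [0, 1).
    Simplified illustration; LaunchDarkly's real algorithm differs."""
    digest = hashlib.sha1(f"{flag_key}.{context_key}".encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000

def assign_variation(context_key: str) -> str:
    # Split traffic 50/50 between the two boolean variations.
    return "true" if bucket(context_key) < 0.5 else "false"
```

Because the assignment is a pure function of the key, re-evaluating the flag for the same context never flips its variation mid-experiment.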
To add metrics to an existing flag:
- From the flags list, click on the flag you want to add a metric to.
- Click Add metrics.
- Select a Randomization unit.
- Select one or more Metrics to monitor.
- Click Save.
When you toggle on flag targeting, you can begin monitoring your metrics.
If you add a metric to a flag that already has targeting toggled on, you will not be prompted to begin monitoring until you toggle the flag off and then on again, or until you change the variation served from "false" to "true."
LaunchDarkly only monitors metrics for flags that have targeting toggled on. You must choose a single flag rule to monitor, such as a targeting rule or the default rule. The rule must serve "true," and the rule cannot serve a percentage rollout.
To begin monitoring metrics on a flag:
- After you have added metrics to a flag, toggle flag targeting On.
- Click Monitor and save. The "Create a release strategy" dialog appears.
- Choose a Rule to monitor. If you do not have any targeting rules on the flag, or have only targeting rules that serve a percentage rollout, the only option is the default rule. You cannot monitor rules that serve a percentage rollout or rules that serve the "false" variation.
- Choose a monitoring Duration of two days, 24 hours, or one hour.
  - (Optional) If you want a different monitoring duration, click Customize traffic allocation and duration. You can then specify the rule's Traffic allocation and choose a custom Monitoring duration.
- Choose a regression behavior:
  - Notify sends you an email and a notification within the LaunchDarkly app if your metric exceeds the threshold you set.
  - Notify and rollback turns flag targeting Off and sends you an email and a notification within the LaunchDarkly app if your metric exceeds the threshold you set.
- Click Next. The "Save changes" dialog appears.
- Complete the "Save changes" fields as needed and click Save changes.
The flag now displays a "Monitoring in progress" message.
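The two regression behaviors reduce to a simple decision, sketched below. The function name and return shape are hypothetical, chosen only to illustrate the difference between the two options; this is not LaunchDarkly code.

```python
def on_regression(behavior: str, targeting_on: bool):
    """Hypothetical sketch of the two regression behaviors.
    Returns the resulting targeting state and the notifications sent."""
    notifications = ["email", "in-app notification"]
    if behavior == "notify":
        return targeting_on, notifications   # targeting is left as-is
    if behavior == "notify and rollback":
        return False, notifications          # targeting is toggled Off
    raise ValueError(f"unknown behavior: {behavior!r}")
```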
You can view the attached metrics' performance on the targeting rule on the flag's Targeting tab.
To view more information about metric performance, click View insights. To learn more, read Flag observability charts.
To edit the metrics attached to a flag, click the Metrics button.
Click Edit metrics to add or remove metrics from the flag.
To remove all metrics from a flag, click the Metrics button and select Remove all metrics.
If you want to stop monitoring before the monitoring window is over:
- From the flag's Targeting tab, click Stop monitoring. A "Stop rollout early" dialog appears.
- Choose which Variation to serve to all contexts after you stop monitoring. The field defaults to the control variation.
- Click Stop.
If you are monitoring metrics on the flag, a message displays at the top of the Targeting and Insights tabs:
- Monitoring in progress on [rule]: LaunchDarkly is actively monitoring metrics on the flag rule.
- No regressions found, [rule] enabled fully: the monitoring window is over and LaunchDarkly found no regressions.
- Measured rollout suspended early: the flag change was rolled back before the end of the monitoring window, either manually or by LaunchDarkly.
- Regression mitigated: LaunchDarkly found a regression and automatically rolled back the change. This message may appear while the monitoring window is in progress or after it has ended.
- Regression detected on the metric: LaunchDarkly found a regression, but because you did not enable automatic rollback, you must roll back the change manually. This message may appear while the monitoring window is in progress or after it has ended.
To roll back a flag change after LaunchDarkly has detected a regression:
- From the flag's Targeting tab, find the rule with the detected regression.
- Click Roll back rule. The "Stop rollout early" dialog appears.
- Choose which Variation to serve to all contexts after you stop monitoring. The field defaults to the control variation.
- Click Stop.
After you add a metric to a flag, the "Flag observability" section on the flag's Insights tab displays the following information:
- The name of the monitored rule, and whether automatic rollback is enabled
- The percentage of a rule's traffic you assigned to be monitored
- The length of the monitoring window
- The number of contexts monitored during the monitoring window
Each metric you add to the flag displays a metric chart with the lower and upper bounds of the metric results for each variation.
Hover over the percentage in the top right corner of each chart to view the confidence interval for the metric. The confidence interval is the range of values you can expect 95% of the contexts that encounter the metric to fall within.
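For intuition, a two-sided 95% confidence interval for a metric's mean can be approximated as shown below. This is a generic normal-approximation sketch for illustration only; it is not LaunchDarkly's actual statistical method.

```python
import math

def confidence_interval_95(values):
    """Approximate two-sided 95% CI for the mean (normal approximation).
    Generic illustration; LaunchDarkly's statistics engine may differ."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    half_width = 1.96 * math.sqrt(variance / n)
    return mean - half_width, mean + half_width
```

For example, `confidence_interval_95([1, 2, 3, 4, 5])` returns a range of roughly 1.61 to 4.39 around the sample mean of 3. Narrower intervals mean more contexts were observed, so the metric estimate is more reliable.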
Each variation for which there was a regression is highlighted in red.