Managing experiments
Overview
This topic explains how to start recording an experiment iteration, stop the iteration when you're finished, and edit an existing experiment.
Understanding experiments as they run
When your experiments are running, you can view information about them on the Experiments list or on the related flag's Experimentation tab. The Experimentation tab displays all the experiments a flag is participating in, including both experiments that are currently recording and experiments that are stopped.
Here are some things you can do with each experiment:
- Stop the experiment or start a new iteration. To learn how, read Starting experiment iterations and Stopping experiment iterations.
- Edit the metrics connected to the experiment and start a new iteration. To learn how, read Editing experiments.
- View experiment data over set periods of time on the Iterations tab.

When you start an experiment, LaunchDarkly creates a new iteration for that experiment. Each iteration includes the results of the experiment over a period of time for its specific configuration. When you stop an experiment or edit its configuration, including its hypothesis, metrics, variations, or audience, LaunchDarkly ends the iteration. This ensures that your experiment results are valid.
Starting experiment iterations
Experiments do not start collecting data automatically. When you want to begin recording data, you must start a new experiment iteration.
To start a new experiment iteration:
1. Navigate to the Experiments list.
2. Click on the name of the experiment you want to start an iteration for. The Design tab appears.
3. Click Start.

The Start button starts a new iteration of the experiment in that specific environment only. If you want to run the same experiment in multiple environments, you must click Start on the experiment in each environment. This behavior prevents you from being billed for events you don't want to track.
To learn more about creating an experiment, read Creating experiments.
After you start your experiment iteration, you can check on it at any time in the flag's Experimentation tab. Experiment data appears on the Experimentation tab after a few minutes. You may have to refresh the page to fetch new data for your experiment.
You can also use the REST API: Patch experiment
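For reference, a start request might look like the following sketch, which calls the Patch experiment endpoint with a semantic patch instruction. The URL path and instruction schema shown here are assumptions based on LaunchDarkly's semantic patch conventions, so verify them against the API documentation before use.

```typescript
// Hypothetical sketch: starting a new experiment iteration through the
// "Patch experiment" REST endpoint. The path and instruction shape are
// assumptions based on LaunchDarkly's semantic patch conventions; verify
// them against the API documentation.
const LD_API_TOKEN = process.env.LD_API_TOKEN!; // a LaunchDarkly API access token

async function startIteration(
  projectKey: string,
  environmentKey: string,
  experimentKey: string
): Promise<void> {
  const url =
    `https://app.launchdarkly.com/api/v2/projects/${projectKey}` +
    `/environments/${environmentKey}/experiments/${experimentKey}`;
  const response = await fetch(url, {
    method: "PATCH",
    headers: {
      Authorization: LD_API_TOKEN,
      "Content-Type": "application/json",
    },
    // A semantic patch: a single instruction asking LaunchDarkly to begin
    // a new iteration for this experiment in this environment.
    body: JSON.stringify({
      comment: "Start a new iteration",
      instructions: [{ kind: "startIteration" }],
    }),
  });
  if (!response.ok) {
    throw new Error(`Failed to start iteration: ${response.status}`);
  }
}
```

Because iterations are per-environment, you would call this once for each environment in which you want the experiment to record data.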
Stopping experiment iterations
If you have discovered a winning variation or need to make changes to your experiment, you can stop your experiment iteration.
To stop an experiment iteration:
1. Navigate to the Experiments list.
2. Click on the name of the experiment you want to stop. The Design tab appears.
3. Click Stop. A "Stop this experiment?" dialog appears.
4. Choose a variation to serve to all targets in the experiment.
5. Enter a reason for stopping the experiment.
6. Click "Stop experiment."
After you stop an experiment iteration, the variation you chose in step 4 is served to all targets that match the experiment audience targeting rule. The flag's Targeting tab updates to reflect this.
You can stop an experiment iteration at any time. Just like starting an iteration, stopping an iteration only impacts the experiment in one environment. If you wish to stop collecting data from every instance of an experiment, you must stop each experiment in each environment individually.
When you stop recording an experiment, LaunchDarkly ends the iteration and stops collecting data about user behavior for that experiment. The data collected for that iteration is available on the experiment's Results tab.
Stopping an experiment does not delete it. You retain the results and data the experiment has already collected.
You can also use the REST API: Patch experiment
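Mirroring the dialog above, a stop request would name the variation to serve to all targets and a reason for stopping. As with starting, this is a sketch: the instruction kind and field names (for example, winningTreatmentId) are assumptions to check against the Patch experiment API documentation.

```typescript
// Hypothetical sketch: stopping the current experiment iteration through the
// same "Patch experiment" endpoint. Mirroring the UI dialog, the instruction
// names the variation to serve going forward and a reason for stopping. The
// field names are assumptions; verify them against the API documentation.
async function stopIteration(
  projectKey: string,
  environmentKey: string,
  experimentKey: string,
  winningTreatmentId: string, // the variation to serve to all targets
  reason: string
): Promise<void> {
  const url =
    `https://app.launchdarkly.com/api/v2/projects/${projectKey}` +
    `/environments/${environmentKey}/experiments/${experimentKey}`;
  const response = await fetch(url, {
    method: "PATCH",
    headers: {
      Authorization: process.env.LD_API_TOKEN!,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      comment: reason,
      instructions: [
        { kind: "stopIteration", winningTreatmentId, winningReason: reason },
      ],
    }),
  });
  if (!response.ok) {
    throw new Error(`Failed to stop iteration: ${response.status}`);
  }
}
```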
Editing experiments
You can make changes to the hypothesis, audience, and metrics of an existing experiment.
After you create an experiment, you cannot edit its name, flag, or the flag's variations. If you want to use a different flag or a different flag variation, you must create a new experiment.
Changing an experiment's details
To change an experiment's details:
1. Navigate to the Design tab of your experiment.
2. Click Edit experiment.
3. Navigate to the "Details" section and click the pencil edit icon.
4. Edit the hypothesis as needed.
5. Choose a new randomization unit as needed. To learn more, read Randomization units.
6. Scroll to the top of the page and click Save.
7. If the experiment was running when you made edits, a "Save experiment design?" dialog appears. Enter a reason for the change and click Save and start new iteration.
Changing an experiment's metric
If you want to begin measuring a completely different metric as part of an experiment, we recommend creating a new experiment instead of editing an existing one. If you want to use a similar metric, you can change the metric associated with an experiment.
Here's how to change the metric:
1. Navigate to the Design tab of your experiment.
2. Click Edit experiment.
3. Navigate to the "Metrics" section and click the pencil edit icon.
4. Choose new metrics from the "Primary metric" or "Secondary metrics" menus.
5. Scroll to the top of the page and click Save.
6. If the experiment was running when you made edits, a "Save experiment design?" dialog appears. Enter a reason for the change and click Save and start new iteration.
Changing an experiment's audience
To edit the audience of an experiment:
1. Navigate to the Design tab of your experiment.
2. Click Edit experiment.
3. Navigate to the "Experiment audience" section and click the pencil edit icon.
4. Edit the traffic allocation as needed.
5. (Optional) To disable variation reassignment, click Advanced, then check the Prevent variation reassignment when increasing traffic checkbox. We strongly recommend leaving this box unchecked. To learn more, read Understanding variation reassignment.
6. Scroll to the top of the page and click Save.
7. If the experiment was running when you made edits, a "Save experiment design?" dialog appears. Enter a reason for the change and click Save and start new iteration.
Legacy experiments
Customers on older contracts may have experiments they created under an old data model. These experiments are listed on the Experiments list as "Legacy experiments."
You cannot edit or start new iterations for legacy experiments, but you can view their results by clicking on the name of the legacy experiment from the list.
You can also use the REST API: Get legacy experiment results
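A results request for a legacy experiment might look like the sketch below. Legacy results are tied to a flag, environment, and metric rather than to an experiment key; the exact path is an assumption to confirm against the Get legacy experiment results API documentation.

```typescript
// Hypothetical sketch: fetching results for a legacy experiment. Legacy
// results are keyed by flag, environment, and metric rather than by an
// experiment key; the exact path is an assumption to confirm against the
// "Get legacy experiment results" API documentation.
async function getLegacyExperimentResults(
  projectKey: string,
  flagKey: string,
  environmentKey: string,
  metricKey: string
): Promise<unknown> {
  const url =
    `https://app.launchdarkly.com/api/v2/flags/${projectKey}/${flagKey}` +
    `/experiments/${environmentKey}/${metricKey}`;
  const response = await fetch(url, {
    headers: { Authorization: process.env.LD_API_TOKEN! },
  });
  if (!response.ok) {
    throw new Error(`Failed to fetch legacy results: ${response.status}`);
  }
  return response.json(); // per-variation totals and statistical significance
}
```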
Winning variations in legacy experiments
After your legacy experiment has registered enough events for a variation to reach 95% statistical significance compared to the baseline flag variation, LaunchDarkly declares that variation the winner. You can view the statistical significance of your metrics and the winning variations in the "Statistical significance" column. The baseline variation does not show statistical significance.
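For intuition about what "95% statistical significance" means here, the sketch below runs a two-proportion z-test, a common frequentist test for comparing conversion rates. It illustrates the general idea only and is not a description of LaunchDarkly's internal methodology.

```typescript
// Minimal sketch of a two-proportion z-test, one common way to check whether
// a variation's conversion rate differs from the baseline's at the 95% level.
// Illustrative only; not LaunchDarkly's internal methodology.
function isSignificantAt95(
  baselineConversions: number,
  baselineTotal: number,
  variationConversions: number,
  variationTotal: number
): boolean {
  const p1 = baselineConversions / baselineTotal;
  const p2 = variationConversions / variationTotal;
  // Pooled rate under the null hypothesis that both variations convert equally.
  const pooled =
    (baselineConversions + variationConversions) / (baselineTotal + variationTotal);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / baselineTotal + 1 / variationTotal)
  );
  const z = (p2 - p1) / standardError;
  // |z| > 1.96 corresponds to p < 0.05 in a two-sided test, i.e. 95% significance.
  return Math.abs(z) > 1.96;
}

// Example: 480/10,000 baseline conversions vs. 560/10,000 for the variation.
console.log(isSignificantAt95(480, 10_000, 560, 10_000)); // true (z ≈ 2.55)
```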
Your winning variation is the flag variation that has the most positive impact compared to the baseline. You can roll the winning variation out to 100% of your end users from the flag's Targeting tab. To learn how, read Percentage rollouts.
To learn how to determine the winning variation in the current Experimentation model instead, read Winning variations.