Managing experiments

Read time: 6 minutes
Last edited: Feb 02, 2024

Overview

This topic explains how to start recording an experiment iteration and stop the experiment iteration when you're finished.

Understanding experiments as they run

When your experiments are running, you can view information about them on the Experiments list or on the related flag's Experimentation tab. The Experimentation tab displays all the experiments a flag is participating in, including both experiments that are currently recording and experiments that are stopped.

Here are some things you can do with each experiment:

  • Stop the experiment or start a new iteration. To learn how, read Managing experiments.
  • Edit the metrics connected to the experiment and start a new iteration. To learn how, read Starting experiment iterations.
  • View experiment data over set periods of time on the Iterations tab:
An experiment's "Iterations" tab.

When you start an experiment, LaunchDarkly creates a new iteration for that experiment. Each iteration includes the results of the experiment over a period of time for its specific configuration. When you stop an experiment or edit its configuration, including its hypothesis, metrics, variations, or audience, LaunchDarkly ends the iteration. This ensures that your experiment results are valid.

Starting experiment iterations

Experiments do not start collecting data automatically. When you want to begin recording data, you must start a new experiment iteration.

To start a new experiment iteration:

  1. Navigate to the Experiments list.
  2. Click on the name of the experiment you want to start an iteration for. The Design tab appears.
  3. Click Start:
An experiment with the "Start" button called out.

The Start button starts a new iteration of the experiment only in that specific environment. If you want to run the same experiment in multiple environments, you must click Start on the experiment in each environment. This behavior prevents you from being billed for events you don't want to track.

To learn more about creating an experiment, read Creating experiments.

After you start your experiment iteration, you can check on it at any time in the flag's Experimentation tab. Experiment data appears on the Experimentation tab after a few minutes. You may have to refresh the page to fetch new data for your experiment.

You can also use the REST API: Patch experiment
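The same operation can be scripted. The sketch below (Python, standard library only) builds the PATCH request for starting an iteration; the endpoint path and the `startIteration` semantic patch instruction follow LaunchDarkly's documented format at the time of writing, but verify both against the current API reference before relying on them.

```python
import json
import urllib.request

API_BASE = "https://app.launchdarkly.com/api/v2"

def start_iteration_request(project_key, env_key, experiment_key,
                            api_token, justification):
    """Build (but do not send) a PATCH request that starts a new
    experiment iteration via a semantic patch instruction."""
    url = (f"{API_BASE}/projects/{project_key}/environments/"
           f"{env_key}/experiments/{experiment_key}")
    body = {
        "comment": "Start a new iteration",
        "instructions": [
            # 'startIteration' is the semantic patch instruction kind
            # for beginning data collection on an experiment.
            {"kind": "startIteration", "changeJustification": justification}
        ],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="PATCH",
        headers={
            "Authorization": api_token,
            # The semantic patch content type tells the API to interpret
            # the body as instructions rather than a JSON Patch document.
            "Content-Type": ("application/json; "
                             "domain-model=launchdarkly.semanticpatch"),
        },
    )
```

Sending the request with `urllib.request.urlopen` (or any HTTP client) starts the iteration only in the named environment, matching the UI behavior described above.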

Stopping experiment iterations

If you need to make changes to your experiment, you can stop your experiment iteration.

To stop an experiment iteration:

  1. Navigate to the Experiments list.
  2. Click on the name of the experiment you want to stop. The Design tab appears.
  3. Click Stop. A "Stop this experiment?" dialog appears.
An experiment with the "Stop" button called out.
  4. Choose a variation to serve to all targets in the experiment.
  5. Enter a reason for stopping the experiment.
  6. Click "Stop experiment."

After you stop an experiment iteration, the variation you chose in step 4 is served to all targets that match the experiment audience targeting rule. The flag's Targeting tab updates to reflect this.

You can stop an experiment iteration at any time. Just like starting an iteration, stopping an iteration only impacts the experiment in one environment. If you wish to stop collecting data from every instance of an experiment, you must stop each experiment in each environment individually.

When you stop recording an experiment, LaunchDarkly ends the iteration and stops collecting data about user behavior for that experiment. The data collected for that iteration is available on the experiment's Results tab.

Stopping an experiment does not delete it. Instead, it lets you retain the results and data the experiment has already collected.

You can also use the REST API: Patch experiment
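Stopping an iteration can likewise be scripted. This sketch mirrors the dialog in the steps above: you name a winning variation to serve to all targets and give a reason. The `stopIteration` instruction kind and its fields follow LaunchDarkly's documented semantic patch format, but confirm them against the current API reference.

```python
import json
import urllib.request

API_BASE = "https://app.launchdarkly.com/api/v2"

def stop_iteration_request(project_key, env_key, experiment_key,
                           api_token, winning_treatment_id, reason):
    """Build (but do not send) a PATCH request that stops the current
    experiment iteration, choosing which variation to serve afterward."""
    url = (f"{API_BASE}/projects/{project_key}/environments/"
           f"{env_key}/experiments/{experiment_key}")
    body = {
        "comment": "Stop the current iteration",
        "instructions": [
            # 'stopIteration' ends data collection; the winning treatment
            # is then served to all targets matching the experiment
            # audience targeting rule.
            {
                "kind": "stopIteration",
                "winningTreatmentId": winning_treatment_id,
                "winningReason": reason,
            }
        ],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="PATCH",
        headers={
            "Authorization": api_token,
            "Content-Type": ("application/json; "
                             "domain-model=launchdarkly.semanticpatch"),
        },
    )
```

As in the UI, this stops the iteration in a single environment only; the collected data remains available on the experiment's Results tab.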

Editing experiments

You can make changes to the hypothesis, audience, and metrics of an existing experiment.

After you create an experiment, you cannot edit its name, flag, or the flag's variations. If you want to use a different flag or a different flag variation, you must create a new experiment.

Changing an experiment's details

To change an experiment's details:

  1. Navigate to the Design tab of your experiment.
  2. Click Edit experiment.
  3. Navigate to the "Details" section and click the pencil edit icon.
  4. Edit the hypothesis as needed.
  5. Choose a new randomization unit as needed. To learn more, read Randomization units.
  6. Scroll to the top of the page and click Save.
  • If the experiment was running when you made edits, a "Save experiment design?" dialog appears. Enter a reason for the change and click Save and start new iteration.

Changing an experiment's metric

If you want to begin measuring a completely different metric as part of an experiment, we recommend creating a new experiment instead of editing an existing one. If you want to use a similar metric, you can change the metric associated with an experiment.

Here's how to change the metric:

  1. Navigate to the Design tab of your experiment.
  2. Click Edit experiment.
  3. Navigate to the "Metrics" section and click the pencil edit icon.
  4. Choose new metrics from the "Primary metric" or "Secondary metrics" menus.
  5. Scroll to the top of the page and click Save.
  • If the experiment was running when you made edits, a "Save experiment design?" dialog appears. Enter a reason for the change and click Save and start new iteration.

Changing an experiment's audience

To edit the audience of an experiment:

  1. Navigate to the Design tab of your experiment.
  2. Click Edit experiment.
  3. Navigate to the "Experiment audience" section and click the pencil edit icon.
The targeting section of a flag's "Targeting" tab.
  4. Edit the traffic allocation as needed.
  • (Optional) To disable variation reassignment, click Advanced, then check the Prevent variation reassignment when increasing traffic checkbox. We strongly recommend leaving this box unchecked. To learn more, read Understanding variation reassignment.
  5. Scroll to the top of the page and click Save.
  • If the experiment was running when you made edits, a "Save experiment design?" dialog appears. Enter a reason for the change and click Save and start new iteration.

Archiving experiments

You can archive experiments that have concluded, as well as the flags and metrics attached to them, but you cannot permanently delete any of them. Archiving experiments preserves the results so you can refer to them in the future.

When you archive an experiment, you cannot start new iterations for it. LaunchDarkly hides archived experiments from the Experiments list by default.

To archive an experiment:

  1. Navigate to the Experiments list in the environment you want to archive an experiment in.
  2. Click on the name of the experiment you want to archive. The experiment detail page appears.
  3. Click Archive experiment.

To view archived experiments on the Experiments list, click on the overflow menu and select "View archived experiments." To switch back to viewing active experiments, click on the overflow menu and select "View active experiments."

The "View archived experiments" option on the Experiments list.

To restore an experiment, click on the experiment's overflow menu from the Experiments list and select "Restore experiment." Or, click Restore experiment from the experiment details page.

Experiment settings are environment-specific

Experiments and experiment settings are specific to single environments. If you want to run the same experiment in different environments, you must create and run the experiment in each environment individually.

You can also use the REST API: Patch experiment
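Because experiments and their settings are environment-specific, automating a multi-environment rollout means issuing one request per environment. The sketch below reuses the same hedged semantic patch format as the earlier examples and assumes the experiment already exists in each environment.

```python
import json
import urllib.request

API_BASE = "https://app.launchdarkly.com/api/v2"

def iteration_requests_per_environment(project_key, env_keys,
                                       experiment_key, api_token,
                                       justification):
    """Build one start-iteration PATCH request per environment, since
    a single request only affects the experiment in one environment."""
    requests = []
    for env_key in env_keys:
        url = (f"{API_BASE}/projects/{project_key}/environments/"
               f"{env_key}/experiments/{experiment_key}")
        body = {
            "instructions": [
                # Each environment's experiment is started independently.
                {"kind": "startIteration",
                 "changeJustification": justification}
            ]
        }
        requests.append(urllib.request.Request(
            url,
            data=json.dumps(body).encode(),
            method="PATCH",
            headers={
                "Authorization": api_token,
                "Content-Type": ("application/json; "
                                 "domain-model=launchdarkly.semanticpatch"),
            },
        ))
    return requests
```

Stopping the experiment everywhere follows the same pattern: one request per environment, which matches the per-environment billing behavior described earlier.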