Creating feature change experiments

Read time: 3 minutes
Last edited: Oct 30, 2023


This topic explains how to set up and configure a feature change experiment in LaunchDarkly.

Configuring experiments

Configuring a feature change experiment requires four steps:

  1. Creating a metric
  2. Building an experiment
  3. Turning on the feature flag
  4. Starting an iteration

These steps are explained in detail below.

Creating metrics

Metrics measure audience behaviors affected by the flags in your experiments. You can use metrics to track all kinds of things, from how often end users access a URL to how long that page takes to load. You can reuse metrics in multiple experiments, or create new ones for your feature change experiment. To learn how to create a new metric, read Metrics.
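Metrics can also be created programmatically through the REST API. The body below is a minimal sketch of a custom conversion metric tracked by an SDK event; the field names are drawn from the public API but are illustrative, so check the Metrics API docs for the authoritative schema.

```python
import json

# Sketch of a request body for creating a custom conversion metric
# via the LaunchDarkly REST API (the metrics endpoint takes the
# project key in the URL). Field names are illustrative assumptions.
metric = {
    "key": "checkout-completed",       # unique metric key (example value)
    "name": "Checkout completed",
    "kind": "custom",                  # custom event-based metric
    "eventKey": "checkout-completed",  # event key your SDK sends
    "isNumeric": False,                # conversion (binary) rather than numeric
}

print(json.dumps(metric, indent=2))
```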

Building experiments

You can view all of the experiments in your environment on the Experiments list.

Before you build an experiment for the first time, you should read about and understand randomization units, primary and secondary metrics, and attribute filters.

To build an experiment:

  1. Navigate to the Experiments list.
  2. Click Create experiment. The "Experiment details" step opens.
  3. Enter a Name.
  4. Enter a Hypothesis.
  5. Select the Feature change experiment type.
  6. Click Next. The "Choose randomization unit and attributes" step opens.
  7. Choose a context kind from the Randomization unit menu.
  8. (Optional) Choose up to five context attributes to slice results by.
  9. Click Next. The "Select metrics" step opens.
  10. Choose a Primary metric, or click Create a new metric.
  11. (Optional) Add any Secondary metrics.
  12. Click Next. The "Choose flag variations" step opens.
  13. Choose a flag to use in the experiment from the Flag menu.
  14. Click Next. The "Set audience" step opens.
  15. Choose which targeting rule to run the experiment on.
  • If you want to restrict your experiment audience to only contexts with certain attributes, create a targeting rule on the flag you include in the experiment and run the experiment on that rule.
  • If you don't want to restrict the audience for your experiment, run the experiment on the flag's default rule. If the flag doesn't have any targeting rules, the flag's default rule will be the only option.
The "Set audience" step with the default rule chosen.
  16. Enter the percentage of traffic for each variation you want to include in the experiment. You must include at least two variations for the experiment to be valid.
  17. Select which variation you want LaunchDarkly to serve to the remaining population.
  • (Optional) Advanced: We strongly recommend leaving the Advanced options on their default settings. To learn more, read Understanding variation reassignment.

    The "Set audience" section of a new experiment.
  18. Click Finish. You are returned to the experiment's Design tab.

After you have created your experiment, the next steps are to toggle on the flag and start an iteration.

You can also use the REST API: Create experiment
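The steps above map onto a single JSON request body when you create the experiment through the REST API. The sketch below shows how the values you enter in the UI might appear in that body; the field names are assumptions based on the public API, so confirm the schema in the Create experiment API docs. The flag variation and traffic allocation fields are omitted here.

```python
import json

# Sketch of a create-experiment request body for the LaunchDarkly REST
# API. Field names mirror the UI steps above but are illustrative
# assumptions; see the "Create experiment" API docs for the real schema.
experiment = {
    "name": "Checkout redesign",        # experiment name (step 3)
    "key": "checkout-redesign",         # unique key (example value)
    "iteration": {
        "hypothesis": "The new checkout flow increases completed checkouts.",  # step 4
        "randomizationUnit": "user",    # context kind (step 7)
        "metrics": [
            {"key": "checkout-completed", "isPrimary": True},  # step 10
        ],
        # Flag variations and traffic allocation (steps 13-17) omitted;
        # their shape is defined in the API docs.
    },
}

print(json.dumps(experiment, indent=2))
```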

Turning on feature flags

For an experiment to begin recording data, the flag used in the experiment must be on. To learn how, read Turning flags on and off.

You can build multiple experiments on the same flag, but you can run only one of those experiments on the flag at a time.
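Flags can also be turned on through the REST API, which uses a "semantic patch" body: a list of instructions plus the environment to change. A minimal sketch, assuming the `turnFlagOn` instruction kind and a `production` environment:

```python
import json

# Sketch of a semantic-patch body for turning a feature flag on via the
# LaunchDarkly REST API. The environmentKey names which environment to
# change; "production" is just an example value.
patch = {
    "environmentKey": "production",
    "instructions": [{"kind": "turnFlagOn"}],
}

print(json.dumps(patch, indent=2))
```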

Starting experiment iterations

After you create an experiment and toggle on the flag, you can start an experiment iteration in one or more environments.

To start an experiment iteration:

  1. Navigate to the Experiments list in the environment you want to start an iteration in.
  2. Click on the name of the experiment you want to start an iteration for. The Results tab appears.
  3. Click Start.
  4. Repeat steps 1-3 for each environment you want to start an iteration in.
An experiment with the "Start" button called out.

Experiment iterations let you record experiment results in discrete blocks of time. To keep results accurate, LaunchDarkly starts a new iteration whenever you make a change that impacts the experiment.

To learn more about starting and stopping experiment iterations, read Managing experiments.

You can also use the REST API: Create iteration
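Starting an iteration through the REST API also uses an instruction-style body. The sketch below assumes a `startIteration` instruction kind; that name is an assumption, so verify it against the Create iteration API docs before using it.

```python
import json

# Hypothetical semantic-patch body for starting an experiment iteration
# via the LaunchDarkly REST API. The "startIteration" instruction kind
# is an assumption; confirm it in the "Create iteration" API docs.
body = {
    "instructions": [{"kind": "startIteration"}],
}

print(json.dumps(body, indent=2))
```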