
    Creating experiments


    Overview

    This topic explains how to set up and configure an experiment in LaunchDarkly. It introduces the concepts of metrics, explains different metric types, and explains how metrics interact with feature flags to create experiments.

    In LaunchDarkly, the combination of metrics and flags is an experiment. Experiments let you measure the effect of flags on end users by mapping them to the metrics your team cares about. To learn more, read Designing experiments.

    Configuring experiments

    When an end user performs a metric-tracked action in your application after they encounter a feature flag, the experiment logs an event.

    For example, an experiment might show that end users are more likely to convert by clicking "Checkout" when the checkout button is a certain color. The metric you would track is the number of times end users click on the checkout button. You would connect the metric to a flag serving four variations, each of which is a different color for the button.
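For instance, a click handler can record that metric event through the SDK's track call. Here is a minimal sketch using the JavaScript client-side SDK; the client-side ID, context key, element selector, and the "checkout-click" event key are all hypothetical:

```typescript
import * as LDClient from 'launchdarkly-js-client-sdk';

// Hypothetical client-side ID and context key.
const client = LDClient.initialize('your-client-side-id', {
  kind: 'user',
  key: 'user-key-123abc',
});

client.on('ready', () => {
  // Send a metric event each time the end user clicks the checkout button.
  // 'checkout-click' is a hypothetical event key; it must match the event
  // key you give the metric in LaunchDarkly.
  document.querySelector('#checkout')?.addEventListener('click', () => {
    client.track('checkout-click');
  });
});
```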

    Configuring an experiment requires four steps:

1. Creating a metric
2. Building an experiment
3. Turning on the feature flag
4. Starting an iteration

    These steps are explained in detail below.

    Prerequisites

    To use Experimentation, you must meet the following prerequisites:

    You must use a supported SDK version

    Most versions of LaunchDarkly SDKs support Experimentation. However, if you use an older SDK version, or if you used an older version of Experimentation, compare your SDK versions against our list of supported SDKs to confirm that you're using a version that supports our most recent Experimentation offering.

    To find the supported SDK versions, read Prerequisites in About Experimentation.

    Your SDKs must be configured to send events

    All of your SDKs must send events. If you have disabled sending events for testing purposes, you must re-enable it. To learn more about the events SDKs send to LaunchDarkly, read Analytics events.

    The all flags method sends events in some SDKs, but not in others. For SDKs that do not send events with the all flags method, you must call the variation method instead. When you call the variation method, make sure you request the variation type that matches the flag's type. To learn more, read Evaluating flags.
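For example, here is a minimal sketch of a variation call with the Node.js server-side SDK; the SDK key, context, flag key, and default value are hypothetical:

```typescript
import * as LaunchDarkly from 'launchdarkly-node-server-sdk';

const client = LaunchDarkly.init('your-sdk-key');

async function getButtonColor(): Promise<string> {
  await client.waitForInitialization();

  const context = { kind: 'user', key: 'user-key-123abc' };

  // variation returns the flag value and sends the evaluation event that
  // Experimentation relies on. The default value ('blue' here) should match
  // the flag's variation type: this hypothetical flag serves strings.
  return client.variation('checkout-button-color', context, 'blue');
}
```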

    Your network must be allowed to send events

    Your network must be allowed to send events to LaunchDarkly. Ensure that event streaming endpoints, mobile.launchdarkly.com, and events.launchdarkly.com are on your allow list.

    You must be using unique context keys

    Your anonymous contexts must have their own unique context keys to use Experimentation. To learn more, read Anonymous contexts and users.
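For example, one common approach is to generate a UUID per end user and persist it, so every anonymous context keeps its own unique key across sessions. A sketch, with hypothetical persistence helpers:

```typescript
import { randomUUID } from 'node:crypto';

// Hypothetical persistence helpers: in a real app, read and write the key
// from local storage, a cookie, or device storage so the same anonymous
// end user keeps the same context key across sessions.
let storedKey: string | undefined;
const getStoredKey = () => storedKey;
const storeKey = (key: string) => { storedKey = key; };

function getOrCreateAnonymousKey(): string {
  // Reuse the persisted key if one exists; otherwise generate a fresh UUID
  // so every anonymous context gets its own unique key.
  const existing = getStoredKey();
  if (existing) return existing;
  const key = randomUUID();
  storeKey(key);
  return key;
}

const anonymousContext = {
  kind: 'user',
  key: getOrCreateAnonymousKey(),
  anonymous: true,
};
```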

    You must configure the Relay Proxy to send events

    You do not have to use the Relay Proxy to use Experimentation. However, if you do use the Relay Proxy, you must configure it to send events. To learn more, read Configuring an SDK to use the Relay Proxy.
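For example, if you run the Relay Proxy with environment-variable configuration, event forwarding is enabled with a setting along the following lines. This is a sketch based on an assumed option name; confirm the exact option names for your Relay Proxy version in its documentation:

```
# Forward events from connected SDKs on to LaunchDarkly
# (assumed option name; check your Relay Proxy version's docs).
USE_EVENTS=true
```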

    Creating metrics

    Metrics measure audience behaviors affected by the flags in your experiments. You can use metrics to track all kinds of things, from how often end users access a URL to how long that page takes to load. If end users load a URL, click an element, or otherwise perform the behavior the metric tracks, LaunchDarkly sends an event to your experiment.

    You don't need to create a new metric for each new experiment. You can reuse existing metrics in multiple experiments, which allows you to compare how the metric performs with different flags. Similarly, a single experiment can use primary and secondary metrics, which allow you to observe how the variations perform against various measurements.

    Using primary and secondary metrics

    You can designate only one metric as the primary metric in an experiment, but you can attach secondary metrics to your experiments if you want to track the performance of additional measurements. We recommend using no more than ten metrics per experiment.

    The primary metric is sometimes called the "overall evaluation criterion." When you are making decisions about the winning variation in an experiment, you should base your decision making only on your primary metric, because decision making becomes much more complicated when you include multiple metrics in a decision.

    If you are using just one metric, there are two possible outcomes: up or down. If you are using two metrics, there are four possible outcome combinations: up/down, down/up, up/up, or down/down.

    For each metric you add to an experiment, the possible outcomes increase quickly. If you are using three metrics, there are eight different possible outcome combinations. If you are using ten metrics, there are 1,024 possible outcome combinations. For this reason we recommend basing your decision-making on only your primary metric.
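In general, n metrics that can each move up or down yield 2^n possible outcome combinations, so each metric you add doubles the number of outcomes you would have to weigh in a decision.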

    To learn how to create a new metric, read Creating metrics.

    Building experiments

    You can view all of the experiments in your environment on the Experiments list.

    To build an experiment:

    1. Navigate to the Experiments list.
    2. Click Create experiment:
    The "Experiments" list with the "Create experiment" button called out.
    The "Experiments" list with the "Create experiment" button called out.
    1. The "Experiment details" step opens. Enter a Name.
    2. Enter a Hypothesis for your experiment.
    3. Choose a context kind from the Randomization unit menu. To learn more, read Randomization units.
    4. Click Next. The "Select metrics" step opens.
    5. Choose a metric from the Primary metric menu.
    6. (Optional) Choose additional metrics from the Secondary metrics menu. To learn more, read Using primary and secondary metrics.
    7. (Optional) Choose up to five context attributes to slice results by. To learn more, read Slicing results.
    The "Select metrics" step of a new experiment.
    The "Select metrics" step of a new experiment.
    10. Click Next. The "Define variations" step opens.
    11. Choose a boolean, string, or number flag to use in the experiment from the Select flag menu. You cannot run experiments on JSON flags.
    12. Click Next. The "Set audience" step opens.
    13. Choose which targeting rule to run the experiment on.
    • If you want to restrict your experiment audience to only contexts with certain attributes, create a targeting rule on the flag you include in the experiment and run the experiment on that rule.
    • If you don't want to restrict the audience for your experiment, run the experiment on the flag's default rule. If the flag doesn't have any targeting rules, the flag's default rule will be the only option.
    The "Set audience" step with the default rule chosen.
    The "Set audience" step with the default rule chosen.
    14. Enter the percentage of traffic for each variation you want to include in the experiment. You must include at least two variations for the experiment to be valid.
    15. Select which variation you want LaunchDarkly to serve to the remaining population.
    • (Optional) To disable traffic variation reassignment, click Advanced, then check the Prevent variation reassignment when increasing traffic checkbox. We strongly recommend leaving this box unchecked. To learn more, read Understanding variation reassignment.

      The "Set audience" section of a new experiment.
      The "Set audience" section of a new experiment.
    16. Click Finish. You are returned to the experiment's Design tab.

    You can also use the REST API: Create experiment
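The sketch below shows what such a request might look like. It is hypothetical: the endpoint path, headers, and body fields are assumptions, so consult the "Create experiment" API reference for the exact schema.

```typescript
// Hypothetical sketch of creating an experiment through the REST API.
// The path, headers, and body fields are assumptions; consult the
// "Create experiment" API reference for the exact schema.
const response = await fetch(
  'https://app.launchdarkly.com/api/v2/projects/my-project/environments/production/experiments',
  {
    method: 'POST',
    headers: {
      Authorization: 'your-api-access-token',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: 'Checkout button color',
      key: 'checkout-button-color',
      // ...metric, flag, and audience configuration per the API schema
    }),
  },
);

console.log(response.status);
```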

    After you have created your experiment, the next steps are to toggle on the flag and start an iteration.

    Turning on feature flags

    For an experiment to begin recording data, the flag used in the experiment must be on. To learn how, read Turning flags on and off.

    You can build multiple experiments on the same flag, but you can run only one of those experiments on the flag at a time.

    Starting experiment iterations

    After you create an experiment and toggle on the flag, you can start an experiment iteration in one or more environments.

    To start an experiment iteration:

    1. Navigate to the Experiments list in the environment you want to start an iteration in.
    2. Click on the name of the experiment you want to start an iteration for. The Results tab appears.
    3. Click Start.
    4. Repeat steps 1-3 for each environment you want to start an iteration in.
    An experiment with the "Start" button called out.

    Experiment iterations allow you to record experiments in individual blocks of time. To ensure accurate experiment results, when you make changes that impact an experiment, LaunchDarkly starts a new iteration of the experiment.

    To learn more about starting and stopping experiment iterations, read Managing experiments.

    Archiving and restoring experiments

    You can archive an experiment if you want to retire it from LaunchDarkly.

    Archived experiments are hidden from the Experiments list by default. You cannot delete a flag or metric attached to an archived experiment, and you cannot start new iterations for archived experiments.

    To archive an experiment:

    1. Navigate to the Experiments list in the environment you want to archive an experiment in.
    2. Click on the name of the experiment you want to archive. The experiment detail page appears.
    3. Click Archive experiment.
    An experiment with the "Archive experiment" button called out.

    To restore an experiment, click Restore experiment from the experiment details page.

    Experiment settings are environment-specific

    When you create an experiment, it appears in every environment in your project. However, the Start button only impacts the experiment in your current environment. If you want to start a new iteration of the experiment in multiple environments, you must click Start in each environment individually. For example, you might run an experiment in Staging to gather data internally before starting an iteration in Production to gather customer data.

    You can also use the REST API: Create iteration