Experimentation

Read time: 5 minutes
Last edited: Mar 26, 2024
Experimentation is available for Pro and Enterprise plans

Experimentation is available as an add-on to customers on a Pro or Enterprise plan. To learn more, read about our pricing. To add Experimentation to your plan, contact Sales.

Overview

This topic explains the concepts and value of LaunchDarkly's Experimentation feature. Experiments let you measure the effect of flags on end users by tracking metrics your team cares about.

Understanding Experimentation

Experimentation lets you validate the impact of features you roll out to your app or infrastructure. You can measure page views, clicks, load time, infrastructure costs, and more.

By connecting metrics you create to flags in your LaunchDarkly environment, you can measure changes in your customers' behavior based on which flags they evaluate. This helps you make more informed decisions, so the features your development team ships align with your business objectives. To learn about different kinds of experiments, read Experiment types. To learn about Experimentation use cases, read Example experiments.

Here is an example of an experiment's Results tab:

An experiment's Results tab.

Prerequisites

To use Experimentation, you must meet the following prerequisites:

  • You must be using the listed version number or higher for the following SDKs:

    Client-side SDKs:

    SDK                       Version
    .NET (client-side)        2.0.0
    Android                   3.1.0
    C++ (client-side)         2.4.8
    Electron                  All versions
    Flutter                   0.2.0
    iOS                       All versions
    JavaScript                2.6.0
    Node.js (client-side)     All versions
    React Native              5.0.0
    React Web                 All versions
    Roku                      All versions
    Vue                       All versions

    Server-side SDKs:

    SDK                       Version
    .NET (server-side)        6.1.0
    Apex                      1.1.0
    C/C++ (server-side)       2.4.0
    Erlang                    1.2.0
    Go                        5.4.0
    Haskell                   2.2.0
    Java                      5.5.0
    Lua                       1.0
    Node.js (server-side)     6.1.0
    PHP                       4.1.0
    Python                    7.2.0
    Ruby                      6.2.0
    Rust                      1.0.0-beta.1

    Edge SDKs:

    SDK                       Version
    Cloudflare                2.3.0
    Vercel                    1.2.0
  • Your SDKs must be configured to send events. If you have disabled sending events for testing purposes, you must re-enable it. The all flags method sends events in some SDKs but not in others; for SDKs that do not send events with the all flags method, call the variation method instead, using the correct variation type. To learn more about the events SDKs send to LaunchDarkly, read Analytics events. For an illustrative setup, see the sketch after this list.

  • You must configure your SDK to assign anonymous contexts their own unique context keys.

  • If you use the LaunchDarkly Relay Proxy, it must be at least version 6.3.0 and you must configure it to send events. To learn more, read Configuring an SDK to use the Relay Proxy.
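
For illustration, here is a minimal sketch of these SDK prerequisites using the server-side Python SDK. The SDK key, flag key, and UUID-based anonymous key below are placeholder assumptions, not required names; client-side SDKs offer their own built-in options for generating unique anonymous keys.

    import uuid

    import ldclient
    from ldclient import Context
    from ldclient.config import Config

    # Events must remain enabled for Experimentation. send_events defaults to
    # True, so set it explicitly only if you previously disabled events.
    ldclient.set_config(Config("your-sdk-key", send_events=True))
    client = ldclient.get()

    # Server-side SDKs do not generate anonymous keys for you. One approach
    # (an assumption, not the only option) is to give each anonymous context
    # a unique key, such as a UUID persisted in the visitor's session.
    anonymous_context = Context.builder(str(uuid.uuid4())).anonymous(True).build()

    # Call the variation method with a default value of the correct type.
    # Evaluations like this send the events that experiments rely on.
    enabled = client.variation("example-flag-key", anonymous_context, False)

    client.flush()  # optionally push queued events before a short-lived process exits
    client.close()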

Using Experimentation

We designed Experimentation to be accessible to a variety of roles within your organization. For example, product managers can use experiments to measure the value of the features they ship, designers can test multiple variations of UI and UX components, and DevOps engineers can test the efficacy and performance of their infrastructure changes.

If an experiment tells you a feature has positive impact, you can roll that feature out to your entire user base to maximize its value. Alternatively, if you don't like the results you're getting from a feature, you can toggle the flag's targeting off and minimize its impact on your user base.

Some of the things you can do with Experimentation include:

  • Running A/B/n tests, also called multivariate tests
  • Starting and stopping experiments at any time, giving you immediate control over which variations your customers encounter
  • Reviewing credible intervals in experiment results, so you can decide which variation to trust to have the most impact
  • Targeting experiments at specific groups of contexts or segments, refining your testing audience
  • Measuring the impact of changes to your product at every layer of your technology stack
  • Rolling out product changes in multiple stages, leveraging both Experimentation and workflows

To learn more, read Designing experiments.

Analyzing experiment results

LaunchDarkly collects experiment data on the experiment's Results tab, which displays results in near-real time. To learn more, read Analyzing experiments.
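
For example, custom conversion metrics are fed by track calls from your SDK. The sketch below uses the server-side Python SDK and assumes the client and context from the earlier sketch; the event keys are hypothetical and must match metrics you have created in LaunchDarkly.

    # The event key must match the key of a metric created in LaunchDarkly;
    # "checkout-completed" is a hypothetical example.
    client.track("checkout-completed", anonymous_context)

    # Numeric metrics can carry a measurement, such as an order total.
    client.track("order-value", anonymous_context, metric_value=64.50)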

As your experiment collects data, LaunchDarkly calculates the variation that is most likely to be the best choice out of all the variations you're testing. After you decide which flag variation has the impact you want, you can gradually roll that variation out to 100% of your customers with LaunchDarkly's percentage rollouts feature. To learn more about percentage rollouts, read Percentage rollouts.

You can export experiment data to an external destination using Data Export. To learn more, read Data Export.

Experimentation best practices

As you use Experimentation, consider these best practices:

  • Use feature flags on every new feature you develop. This is a best practice, but it especially helps when you're running experiments in LaunchDarkly. By flagging every feature, you can quickly turn any aspect of your product into an experiment.
  • Run experiments on as many feature flags as possible. This creates a culture of experimentation that helps you detect unexpected problems and refine and pressure-test metrics.
  • Consider experiments from day one. Create hypotheses in the planning stage of feature development, so you and your team are ready to run experiments as soon as your feature launches.
  • Define what you're measuring. Align with your team on which tangible metrics you're optimizing for, and what results constitute success.
  • Plan your experiments in relation to each other. If you're running multiple experiments simultaneously, make sure they don't collect similar or conflicting data.
  • Associate end users who interact with your app before and after logging in. If someone accesses your experiment from both a logged-out and a logged-in state, each state generates its own context key. You can associate multiple related contexts using multi-contexts, as shown in the sketch after this list. To learn more, read Associating anonymous contexts with logged-in end users.
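
Here is a sketch of that association using the server-side Python SDK. The context kinds and keys are hypothetical; create_multi combines the pre-login and post-login contexts so LaunchDarkly can relate their events.

    from ldclient import Context

    # Hypothetical keys: the anonymous key was generated before login, and
    # the user key comes from your authentication system after login.
    device_context = (
        Context.builder("device-key-123").kind("device").anonymous(True).build()
    )
    user_context = Context.builder("user-key-456").kind("user").build()

    # A multi-context ties both together in subsequent evaluations and events.
    multi_context = Context.create_multi(device_context, user_context)
    flag_value = client.variation("example-flag-key", multi_context, False)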

You can use experiments to measure a variety of outcomes. Some example experiments include:

  • Testing the efficacy of different search implementations, such as Elasticsearch versus Solr versus Algolia
  • Tracking how features you ship are increasing or decreasing page load time
  • Calculating conversion rates by monitoring how frequently end users click on various page elements
  • Testing the impact of new artificial intelligence and machine learning (AI/ML) models on end-user behavior and product performance

You can also use the REST API: Experiments (beta)
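
As a hedged sketch, the following Python snippet lists experiments through the beta REST API. The project key, environment key, and access token are placeholders, and the endpoint path and response shape are assumptions based on the beta API; confirm them against the current API reference.

    import json
    from urllib.request import Request, urlopen

    # Placeholders: substitute your own access token, project key, and
    # environment key.
    url = (
        "https://app.launchdarkly.com/api/v2/projects/"
        "my-project/environments/production/experiments"
    )
    request = Request(
        url,
        headers={
            "Authorization": "api-your-access-token",
            "LD-API-Version": "beta",  # beta APIs require this opt-in header
        },
    )

    with urlopen(request) as response:
        experiments = json.load(response)

    for item in experiments.get("items", []):
        print(item.get("key"), item.get("name"))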