LaunchDarkly Developer Documentation

Running A/B tests

A/B Testing with Multivariate Flags

We are currently working on support for A/B testing with multivariate flags. For now, if you would like to do A/B testing, you must use a boolean feature flag.

Setting up LaunchDarkly to run A/B tests (also known as experiments) involves a few setup steps beyond what's needed to control feature flags. We'll walk through the steps one by one, assuming that you've already gone through the basic setup for a feature flag and can successfully toggle features on and off. Note that you'll need to define what percentage of users receive each of the flag's two variations. For example, you might split flag evaluations 90/10 so that only 10% of users receive the experimental variation, or 50/50 if you want to see results faster.
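LaunchDarkly handles the rollout split for you, but as a rough intuition, a minimal sketch of deterministic percentage bucketing might look like the following. The hashing scheme and all names here are illustrative assumptions, not LaunchDarkly's actual algorithm:

```python
import hashlib

def bucket_user(user_key: str, flag_key: str, rollout_percent: float) -> bool:
    """Deterministically assign a user to the experimental variation.

    Hashes the user and flag keys together so the same user always gets
    the same variation for a given flag. This is a common bucketing
    technique; LaunchDarkly's internal algorithm may differ.
    """
    digest = hashlib.sha1(f"{flag_key}.{user_key}".encode()).hexdigest()
    # Map the hash onto [0, 1) and compare against the rollout percentage.
    point = int(digest[:15], 16) / float(16 ** 15)
    return point < rollout_percent / 100.0

# A 90/10 split: roughly 10% of users land in the experimental variation.
in_experiment = [bucket_user(f"user-{i}", "new-checkout", 10) for i in range(10_000)]
```

Because assignment depends only on the keys, a user keeps the same variation across sessions, which is what makes the experiment's results interpretable.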

Include the client-side SDK

If you want to run client-side A/B tests (click and pageview goals), you must set up the client-side SDK. See our JavaScript SDK Reference for details.

Alternatively, if your web application is not written in JavaScript, you may use custom events to emulate tracking client-side goals.
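For instance, a non-JavaScript backend can record a "user reached the confirmation page" custom event from the server. The sketch below uses a hypothetical in-memory stub in place of a real SDK client so it is self-contained; the goal key and user shape are illustrative:

```python
class StubLDClient:
    """Hypothetical stand-in for a server-side LaunchDarkly client,
    used here only to illustrate the call pattern."""
    def __init__(self):
        self.events = []

    def track(self, goal_key, user):
        # A real client would send this event to LaunchDarkly.
        self.events.append({"key": goal_key, "user": user["key"]})

ld_client = StubLDClient()
user = {"key": "user-123"}

# When the user reaches the confirmation page, record it as a custom
# event instead of relying on a client-side pageview goal.
ld_client.track("signup-confirmation-viewed", user)
```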

Create goals

Goals are the metrics used to measure the effectiveness of a feature. To run an A/B test, you need to define the goals you care about. LaunchDarkly supports three kinds of goals:

  • Click goals - track whether a user clicks on a specific page element.
  • Page view goals - track whether a user lands on a specific page (for example, a confirmation page).
  • Custom goals - track other user interactions that don't correspond to page views or clicks.
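The three goal kinds above can be pictured as follows. This is only an illustrative model; the field names are assumptions for the sketch, not LaunchDarkly's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ClickGoal:
    css_selector: str   # a page element, e.g. a "Buy now" button

@dataclass
class PageViewGoal:
    url: str            # a page the user lands on, e.g. a confirmation page

@dataclass
class CustomGoal:
    event_key: str      # matched against the key passed to track() calls

# One hypothetical goal of each kind:
goals = [
    ClickGoal(css_selector="#buy-now"),
    PageViewGoal(url="/checkout/confirmation"),
    CustomGoal(event_key="invited-a-friend"),
]
```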

Once you've created your goals, you'll need to decide which goals to track for each feature.

Track goals

Goals are defined per project in LaunchDarkly and can be re-used across multiple feature flags. To indicate which goals are relevant to a flag, associate them with it on the feature flag's Experiments tab by clicking "Manage Goals".

Associating a goal with a feature flag


When you're ready to start tracking your goals, add the appropriate snippet for your SDK to your application:

Java: ldClient.track("your-goal-key", user);
Python: ldclient.get().track("your-goal-key", user)
Ruby: ld_client.track("your-goal-key", user)
Go: ldClient.Track("your-goal-key", user)
Node.js: ld_client.track("your-goal-key", user);
PHP: $client->track("your-goal-key", user);
.NET: ldClient.Track("your-goal-key", user);

Custom event data

Optionally, you can attach custom JSON data to your event by passing an extra parameter to your track call. See our SDK reference guides for more information.
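For example, a purchase event might carry the order amount so you can analyze it later. As above, this sketch records events into a hypothetical in-memory client rather than a real SDK, and the goal key and data fields are illustrative:

```python
class RecordingLDClient:
    """Hypothetical stand-in for an SDK client; records events in memory
    so the sketch is self-contained."""
    def __init__(self):
        self.events = []

    def track(self, goal_key, user, data=None):
        # The optional third argument carries custom JSON-serializable data.
        self.events.append({"key": goal_key, "user": user["key"], "data": data})

ld_client = RecordingLDClient()

# Attach order details to a custom "purchase-completed" event.
ld_client.track("purchase-completed", {"key": "user-123"},
                {"price": 320, "currency": "USD"})
```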

View results

Once your goals have been set up (and you've verified that you're receiving events for your goals on the dev console), you'll begin to see results in the Experiments tab.

A/B testing results on the Experiments tab


Showing results

We'll only show results once each variation of the feature flag has received at least one impression.

Event processing time

We process events on a five-minute delay. If you're not seeing the numbers you expect on the Experiments tab, ensure that you've waited at least five minutes for your events to be processed.

Once we've determined a winner, we'll show a green check mark next to the winning variation. We'll select a winner when each variation has at least 1000 distinct users (impressions), and the confidence level is at least 95%.
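To build intuition for what "95% confidence" means here, the sketch below runs a standard two-proportion z-test on hypothetical conversion counts. This is one common way to compare two conversion rates; it is not necessarily LaunchDarkly's exact methodology:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a difference in conversion rates.

    Returns the z statistic and two-tailed p-value; p < 0.05 roughly
    corresponds to the 95% confidence threshold mentioned above.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return z, p_value

# Hypothetical data: 1000 impressions per variation, 10% vs 14% conversion.
z, p = z_test(100, 1000, 140, 1000)
significant = p < 0.05   # a winner is only declared at >= 95% confidence
```

Note that with fewer impressions the same rate difference may not reach significance, which is why a minimum of 1000 distinct users per variation is required.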

More about custom goals and events

Custom goals are useful for recording interactions that don't directly correspond to clicks or page views. LaunchDarkly allows you to record custom events client-side (via our JavaScript SDK) or on the server (via our server-side SDKs).
