Example experiments

Read time: 8 minutes
Last edited: Mar 25, 2024
Experimentation is available for Pro and Enterprise plans

Experimentation is available as an add-on to customers on a Pro or Enterprise plan. To learn more, read about our pricing. To add Experimentation to your plan, contact Sales.

Overview

This guide provides examples of experiments you can build in LaunchDarkly.

Example 1: Notifications opt-in

When a customer downloads your app onto their phone or tablet for the first time, the app may ask them to opt in to notifications. Some apps ask the customer to opt in right away; others wait until the customer has interacted with the app for a certain amount of time before asking.

Asking the customer to opt in right away may feel premature, because the customer might not yet know whether they like your app enough to want notifications. However, waiting too long may also result in fewer opt-ins, because the customer may have become comfortable using your app without notifications.

The Apple App Store allows apps to ask customers to opt in only once. This makes it important to time your one opt-in request for the moment when the customer is most likely to accept. You can use Experimentation to find that moment.

Hypothesis

To begin, create a hypothesis for your experiment.

Here is an example:

"We believe that by prompting a customer to opt-in to notifications on the fifth time they open our app, customers will opt-in to notifications significantly more often than those who are prompted the first time they open our app, and slightly more often than those who are prompted on the tenth time they open our app."

Sample size

Next, determine the sample size needed for your experiment. In this example, you know that about 2,000 people download your app per day, and you'd like to have at least 40,000 customers in your experiment. You decide to include 100% of your new customers in the sample to get the fastest results. So, you will run this experiment for 21 days to include about 42,000 customers.

Here is the sample size calculation:

2,000 customers x 21 days x 100% of customers = 42,000 customers in experiment

Metric

You will use a click conversion metric to track when customers opt in to notifications. To learn how to create a click conversion metric, read Click conversion metrics.

Here is what your metric will look like:

The "Metric information" panel for a click conversion metric.
The "Metric information" panel for a click conversion metric.

Variations

The string flag you use in your experiment will have three variations:

  • Prompt on 1st open, which acts as the control
  • Prompt on 5th open
  • Prompt on 10th open

Here is what your flag's Variations tab will look like:

A flag's "Variations" tab with three variations.
A flag's "Variations" tab with three variations.

Audience

You want to include only customers who have newly downloaded the app. To accomplish this, you can build a rule on your flag that excludes customers who have already made an opt-in decision. To learn how, read Targeting rules.

Here is what your flag's Targeting tab will look like with a targeting rule:

A flag's "Targeting" tab including a targeting rule.
A flag's "Targeting" tab including a targeting rule.
You must run experiments on the correct rule

When you build an experiment, you can choose which flag targeting rule to run the experiment on. This ensures you include the right group of customers in your experiment. In this example, run the experiment on the flag’s targeting rule that you just created, not the flag’s default rule.
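
For the targeting rule described above to match, the evaluation context your app sends must carry the attribute the rule targets. Here is a hypothetical example; the `optInDecisionMade` attribute name is illustrative, not part of this guide.

```typescript
// Hypothetical context attribute the targeting rule could match on.
// Customers who have already made an opt-in decision would be excluded.
const context = {
  kind: 'user',
  key: 'user-key-123',
  optInDecisionMade: false,
};
```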

Experiment design

Finally, combine your hypothesis, metric, flag, and audience into an experiment. To learn how, read Creating experiments.

Here is what your finished experiment will look like in LaunchDarkly:

A complete opt-in experiment.

Example 2: Third-party library assessment

Adding JavaScript libraries to your front end has both benefits and drawbacks. Each library adds latency to your site's load time, but it can also enable functionality that benefits your organization, such as analytics, personalization, ad revenue, audience assessment, and surveys.

To find the right balance between site load time and total revenue, you can use Experimentation to measure your site's revenue with different third-party JavaScript libraries installed.

Hypothesis

To begin, create a hypothesis for your experiment.

Here is an example:

"We believe that installing third-party libraries X and Y on two different versions of our site will yield total revenue increases for both versions, compared to the control with neither library X nor library Y installed."

Sample size

Next, determine the sample size needed for your experiment. In this example, you do not want to include your entire user base in your experiment, so you decide to limit the experiment to 10% of your customers. You know that about 60,000 people visit your site per day, and you'd like to have at least 120,000 customers in your experiment. So, you will run this experiment for 20 days.

Here is the sample size calculation:

60,000 customers x 20 days x 10% of customers = 120,000 customers in experiment

Metric

You will use a custom numeric metric to track total revenue. To learn how to create a custom numeric metric, read Custom numeric metrics.

Here is what your metric will look like:

The "Create metric" dialog for a custom numeric metric.
The "Create metric" dialog for a custom numeric metric.

When you create the metric, enter the appropriate event key from your codebase. In this example, the event key is "Total revenue."

Metric keys and event keys are different

LaunchDarkly automatically generates a metric key when you create a metric. You can use the metric key to identify the metric in API calls. To learn more, read Creating metrics.

Custom conversion/binary and custom numeric metrics also require an event key. You can set the event key to anything you want. Adding this event key to your codebase lets your SDK track actions customers take in your app as events. To learn more, read Sending custom events.
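
Here is a minimal sketch of sending that event with the LaunchDarkly JavaScript SDK's `track` call; the client-side ID and user key are placeholders, and the third argument carries the numeric value the metric records.

```typescript
import { initialize } from 'launchdarkly-js-client-sdk';

const client = initialize('your-client-side-id', { kind: 'user', key: 'user-key-123' });

async function recordOrder(orderTotal: number): Promise<void> {
  await client.waitForInitialization();
  // The event key must match the metric's event key exactly.
  // The third argument is the numeric value the metric measures.
  client.track('Total revenue', undefined, orderTotal);
}
```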

Although you will base your decision on the results of your primary metric, you are also curious about how much latency each third-party library adds. You will add a secondary custom numeric metric, with a success criterion of lower than baseline, to measure time to first byte (TTFB) for each variation.

Here is what your secondary metric will look like:

The "Create metric" dialog for a secondary custom numeric metric.
The "Create metric" dialog for a secondary custom numeric metric.


When you create the metric, enter the appropriate event key from your codebase. In this example, the event key is "Time to first byte."
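
Here is a similar sketch for the secondary metric. It measures TTFB with the browser Performance API; treating TTFB as `responseStart` relative to the navigation's start is one common definition, assumed here for illustration.

```typescript
import { initialize } from 'launchdarkly-js-client-sdk';

const client = initialize('your-client-side-id', { kind: 'user', key: 'user-key-123' });

async function reportTimeToFirstByte(): Promise<void> {
  await client.waitForInitialization();
  const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
  if (nav) {
    // The event key must match the secondary metric's event key.
    client.track('Time to first byte', undefined, nav.responseStart - nav.startTime);
  }
}
```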

Variations

The string flag you use in your experiment will have three variations:

  • No third-party library installed, which acts as the control
  • Library X installed
  • Library Y installed

Here is what your flag's Variations tab will look like:

A flag's "Variations" tab with three variations.

Audience

In this experiment, you want a random sample drawn from your entire user base, so you will not create any targeting rules for the flag.

Experiment design

Finally, combine your hypothesis, metrics, flag, and audience into an experiment. To learn how, read Creating experiments.

Here is what your finished experiment will look like in LaunchDarkly:

A complete third-party library experiment.

Example 3: Trial account conversions

You offer 14-day trial accounts to potential customers. When the trial ends, customers have the option to convert to a paid account. You chose the 14-day trial length arbitrarily, and would like to find out whether giving customers a longer trial period results in more conversions to paid accounts.

Hypothesis

To begin, create a hypothesis for your experiment.

Here is an example:

"We believe that giving customers an extra week in their free trial, for a total of 21 days, will increase conversions to a paid account compared to the control of 14 days."

Sample size

Next, determine the sample size needed for your experiment. In this example, you know that about 750 customers sign up for a free trial per day, and you'd like to have at least 10,000 customers in your experiment. So, you will run this experiment for 14 days to include about 10,500 customers.

Here is the sample size calculation:

750 customers x 14 days x 100% of customers = 10,500 customers in experiment

Metric

You will use a custom conversion/binary metric to track conversions to paid accounts. To learn how to create a custom conversion/binary metric, read Custom conversion/binary metrics.

Here is what your metric will look like:

The "Metric information" panel for a custom conversion/binary metric.
The "Metric information" panel for a custom conversion/binary metric.


When you create the metric, enter the appropriate event key from your codebase. In this example, the event key is "Conversion to paid accounts."
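
Here is a minimal sketch of sending that event from your app when a customer upgrades, using the LaunchDarkly JavaScript SDK; the client-side ID and user key are placeholders.

```typescript
import { initialize } from 'launchdarkly-js-client-sdk';

const client = initialize('your-client-side-id', { kind: 'user', key: 'user-key-123' });

async function onUpgradeToPaidAccount(): Promise<void> {
  await client.waitForInitialization();
  // The event key must match the metric's event key exactly;
  // a conversion/binary metric takes no numeric value.
  client.track('Conversion to paid accounts');
}
```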

Variations

The number flag you use in your experiment will have two variations:

  • A 14-day trial, which acts as the control
  • A 21-day trial

Here is what your flag's Variations tab will look like:

A flag's "Variations" tab with two variations.
A flag's "Variations" tab with two variations.

Audience

In this experiment, you do not want to include students who may be using your trial service for school projects, so you will exclude customers with email addresses that end with .edu. To accomplish this, you can build a rule on your flag. To learn how, read Targeting rules.
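
For the rule to evaluate email addresses, the context your app sends must include an email attribute. Here is a hypothetical example; the key and address shown are illustrative.

```typescript
// The email attribute lets the targeting rule exclude .edu addresses.
const context = {
  kind: 'user',
  key: 'user-key-123',
  email: 'taylor@example.com',
};
```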

Here is what your flag's Targeting tab will look like with a targeting rule:

A flag's "Targeting" tab including a targeting rule.
A flag's "Targeting" tab including a targeting rule.
You must run experiments on the correct rule

When you build an experiment, you can choose which flag targeting rule to run the experiment on. This ensures you include the right group of customers in your experiment. In this example, run the experiment on the flag’s targeting rule that you just created, not the flag’s default rule.

Experiment design

Finally, combine your hypothesis, metric, flag, and audience into an experiment. To learn how, read Creating experiments.

Here is what your finished experiment will look like in LaunchDarkly:

A complete trial period length experiment.

Conclusion

In this guide, we presented three use cases for LaunchDarkly Experimentation: optimizing app notification opt-ins, measuring the value of third-party libraries, and testing trial account conversions. For more examples of what you can measure with metrics, read Choosing a metric type.

We hope this guide gets you started on the path to creating your own experiments. To learn more about our Experimentation feature, read Experimentation.
