
LaunchDarkly vocabulary

Read time: 22 minutes
Last edited: Dec 19, 2024

Overview

This topic defines common words used in the LaunchDarkly application and documentation. While many of these words may be familiar to you, in some cases they have nuances specific to LaunchDarkly.

The following definitions may be useful as you work in LaunchDarkly:

A

AI config

An AI config is a LaunchDarkly resource that you create when your application uses artificial intelligence (AI) model generation. It manages the model configurations and messages you use in your application. When you use a LaunchDarkly AI SDK to customize an AI config, the SDK determines which message and model your application should serve to which contexts. The SDK also customizes the message based on context attributes that you provide.

To learn more, read AI configs and AI SDKs.

Analysis method

A metric's analysis method is the mathematical technique by which you want to analyze its results. You can analyze results by mean, median, or percentile.

To learn more, read Analysis method.

Application

An application is a LaunchDarkly resource that describes what you are delivering to a customer. LaunchDarkly automatically creates applications when it establishes a connection with a LaunchDarkly SDK that contains application information. After an application is created, you can build flag targeting rules based on application name, version, or other properties, such as whether or not a particular application version is supported.

To learn more, read Applications and application versions.

Attribute

An attribute is a field in a context that you can use in targeting rules for flags and segments. Each context that encounters a feature flag in your product can have a different value for a given attribute.

To learn more, read Context attributes.

Audience

An experiment's audience is the combination of the targeting rule you're experimenting on and the number of contexts you allocate to each flag variation.

To learn more, read Allocating experiment audiences.

B

Bayesian statistics

In LaunchDarkly Experimentation, Bayesian statistics is a results analysis option that is well suited to experiments with small sample sizes. The other analysis option is frequentist statistics.

To learn more, read Bayesian versus frequentist statistics.

C

Cohort

A cohort can refer to a group of contexts in Amplitude synced with a LaunchDarkly segment, or to the targeting rules on a migration flag.

To learn more about Amplitude cohorts, read Syncing segments with Amplitude cohorts.

Migration flag cohorts are analogous to the targeting rules for other types of feature flags. The default cohort is analogous to the default rule.

To learn more, read Targeting with migration flags.

Confidence interval

In frequentist statistics, the confidence interval is the range of metric values that contains a chosen percentage of events if you were to repeat an experiment many times. For example, a 95% confidence interval is the range that contains the values for 95% of the metric events you would receive.

To learn more, read Historical results for frequentist experiments.

Context

A context is a generalized way of referring to the people, services, machines, or other resources that encounter feature flags in your product.

To learn more, read Contexts.
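
As an illustrative sketch, here is how a context might be constructed with the LaunchDarkly Python server-side SDK. The key, kind, and attribute values are hypothetical.

```python
# A minimal sketch using the LaunchDarkly Python server-side SDK;
# the key, name, and email values are hypothetical.
from ldclient import Context

context = (
    Context.builder("user-key-123abc")  # a unique key for this context
    .kind("user")                       # the context kind
    .name("Sandy")                      # attributes usable in targeting rules
    .set("email", "sandy@example.edu")
    .build()
)
```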

Context instance

A context instance is a unique combination of one or more contexts that have encountered a feature flag in your product.

To learn more, read Context instances.

Context kind

A context kind organizes your contexts into different types based on the kinds of resources encountering flags. Each context has one kind with a unique set of corresponding attributes that you can use for targeting and Experimentation.

Some customers are billed by contexts. This billing method uses Monthly active users (MAU).

To learn more, read Context kinds.
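
Contexts of different kinds can also be combined into a single multi-context and evaluated together. A minimal sketch, with hypothetical kinds and keys:

```python
# Combining two context kinds into one multi-context;
# the kinds and keys are hypothetical.
from ldclient import Context

user = Context.create("user-key-123abc", "user")
device = Context.create("device-key-456def", "device")
multi_context = Context.create_multi(user, device)
```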

Control variation

An experiment's control variation is the flag variation that you are comparing the treatment variation to. The comparison determines if the treatment has a negative or positive effect on the metric you're measuring.

To learn more, read Experimentation.

Conversion rate

In Bayesian results, the conversion rate column displays:

  • for count conversion metrics: the total number of times a context triggered a conversion event
  • for binary conversion metrics: the percentage of contexts that triggered at least one conversion event

To learn more, read Bayesian experiment results and Frequentist experiment results.

Conversions

In experiment results, the conversions column displays the total number of times a context triggered a conversion event measured by a conversion metric.

To learn more, read Bayesian experiment results and Frequentist experiment results.

Credible interval

In Bayesian statistics, the credible interval is the range of metric values that you can be confident a chosen percentage of your results will fall into. For example, a 90% credible interval is the range that contains 90% of the values in the metric's posterior distribution.

To learn more, read Historical results for Bayesian experiments.
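
As a generic illustration (not specific to LaunchDarkly's implementation), a central 90% credible interval can be read off a posterior distribution by taking the 5th and 95th percentiles of posterior samples:

```python
# Illustrative only: computing a central 90% credible interval
# from hypothetical samples of a metric's posterior distribution.
import numpy as np

posterior_samples = np.random.default_rng(0).normal(loc=0.12, scale=0.02, size=10_000)
low, high = np.percentile(posterior_samples, [5, 95])
print(f"90% credible interval: [{low:.3f}, {high:.3f}]")
```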

D

Default rule

A default rule describes the feature flag variation to serve to the contexts that don't match any of the individual targets or rules you have previously specified for the flag. It is sometimes called the "fallthrough" rule because all of the rules preceding it have been evaluated, and the context encountering the flag has "fallen through" to this last rule. To learn more, read Set the default rule.

The default rule only applies when the flag is toggled on. If the flag is toggled off, then LaunchDarkly will serve the "default off variation" for the flag. In the LaunchDarkly user interface (UI), the default off variation is specified in the field labeled "If targeting is off, serve." To learn more, read The off variation.

E

Environment

An environment is an organizational unit contained within a project. You can create multiple environments within each project. Environments in LaunchDarkly typically correspond to the environments in which your code is deployed, such as development, staging, and production. All environments in a single project contain the same flags. However, the flags can have different states, targets, and rules in each environment.

To learn more, read Environments.

Evaluation

An evaluation is what happens when your application's code sends the LaunchDarkly SDK information about a particular flag and a particular context that has encountered that flag, and the SDK sends back the value of the flag variation that the context should receive. We say that the SDK evaluates the flag, or that the flag has been evaluated for a particular context or customer.

Try it in your SDK: Evaluating flags
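
For example, a minimal evaluation with the LaunchDarkly Python server-side SDK might look like the following; the SDK key, flag key, and context values are placeholders.

```python
# A minimal evaluation sketch; the SDK key, flag key, and context
# values are placeholders.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("sdk-key-123abc"))
client = ldclient.get()

context = Context.builder("user-key-123abc").name("Sandy").build()

# The SDK evaluates the flag for this context and returns the value of
# the variation the context should receive; False is the fallback value.
show_feature = client.variation("my-flag-key", context, False)

if show_feature:
    ...  # serve the new feature
else:
    ...  # serve the old behavior

client.close()
```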

Event

An event refers to data that LaunchDarkly SDKs send to LaunchDarkly when a user or other context takes an action in your app. Server-side, client-side, and edge SDKs send analytics events to LaunchDarkly as a result of feature flag evaluations and certain SDK calls.

To learn more, read Analytics events and Metric events.

Event key

An event key is a unique identifier you set for a particular kind of event within your app. Metrics use event keys to identify events for performance tracking. Events are environment-specific.

To learn more, read Metric events.
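
For example, reusing the client and context from the evaluation sketch above, a hypothetical event key "checkout-completed" could be sent to LaunchDarkly like this:

```python
# Sends a custom event whose event key matches the key configured on a
# metric; the optional metric_value is used by numeric metrics.
client.track("checkout-completed", context, metric_value=99.95)
```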

Expected loss

The expected loss for a variation within an experiment is the risk, expressed as a percentage, that the variation will not actually be an improvement over the control variation due to the margin of error in metric results. Expected loss displays only for metrics that use an "Average" analysis method.

To learn more, read Bayesian experiment results and Frequentist experiment results.

Experiment

An experiment is a LaunchDarkly feature that connects a flag with one or more metrics to measure end-user behavior. Experiments track how different variations of a flag affect end-user interactions with your app, and determine the winning variation. You can run an experiment for one or more iterations.

To learn more, read Experimentation.

Experiment flag

An experiment flag is a temporary flag that has an experiment running on one of its targeting rules.

To learn more, read Flag templates.

Experimentation key

An Experimentation key is a unique context key from a server-side, client-side, or edge SDK that is included in an experiment:

  • if the same context key is in one experiment multiple times, LaunchDarkly counts it as one Experimentation key
  • if the same context key is in two different experiments, LaunchDarkly counts it as two Experimentation keys

Some customers are billed by Experimentation keys. To learn more, read Experimentation keys.

Exposures

In experiment results, the exposures column displays the total number of contexts measured by the metric.

To learn more, read Bayesian experiment results and Frequentist experiment results.

F

Fallback value

The fallback value is the value your application should use for a feature flag or AI config in error situations.

Specifically, for the client-side, server-side, and edge SDKs, the fallback value is the flag variation that LaunchDarkly serves in the following two situations:

  • If your application cannot connect to LaunchDarkly.
  • If your application can connect to LaunchDarkly, but the flag is toggled off and you have not specified a default off variation. To learn more, read The off variation.

For the AI SDKs, the fallback value is the AI config variation for your application to use if the AI config is not found or if any errors occur during processing.

Regardless of how you configure variations or targeting rules, each time you evaluate a flag or customize an AI config from the LaunchDarkly SDK, you must include a fallback value as one of the parameters.
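
In the server-side SDKs, the fallback value is the final parameter of the evaluation call. A sketch, reusing the client and context from the evaluation example above with a hypothetical flag:

```python
# If the client cannot connect to LaunchDarkly, or the flag is off with
# no off variation configured, this call returns the fallback "basic".
plan = client.variation("pricing-tier", context, "basic")
```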

Fallthrough rule

A fallthrough rule is a synonym for default rule.

Feature change experiment

A feature change experiment lets you measure the effect different flag variations have on a metric.

To learn more, read Experiment types.

Flag

A flag is the basic unit of feature management. It describes the different variations of a feature and the rules that allow different entities to access them. Different entities that access your features could be a percentage of your application's traffic, individuals, or people or software entities who share common characteristics like location, email domain, or type of mobile device. The entities that encounter feature flags in your product are called contexts.

To learn more, read Using feature management.

Frequentist statistics

In LaunchDarkly Experimentation, frequentist statistics is a results analysis option that is well suited to experiments with larger sample sizes. The other analysis option is Bayesian statistics.

To learn more, read Bayesian versus frequentist statistics.

Funnel metric group

A funnel metric group is a reusable, ordered list of metrics you can use with funnel optimization experiments to measure end-user progression through a series of steps, typically from the awareness stage to the purchasing stage of your marketing funnel.

To learn more, read Metric groups.

Funnel optimization experiment

A funnel optimization experiment uses multiple metrics within a funnel metric group to track the performance of each of the steps in a marketing funnel over time.

To learn more, read Experiment types.

G

Guarded release/guarded rollout

A guarded release or guarded rollout is a type of flag rollout in which LaunchDarkly gradually increases the percentage of contexts that are receiving a particular flag variation, while monitoring for regressions. To perform this monitoring, you must attach one or more metrics to your flag. You can configure LaunchDarkly to notify you or automatically roll back a release when it detects a regression.

To learn more, read Guarded rollouts and Releasing features with LaunchDarkly.

H

Holdout

A holdout is a group of contexts that you have temporarily excluded from all or a selected set of your experiments. Holdouts allow you to measure the effectiveness of your Experimentation program.

To learn more, read Holdouts.

I

Iteration

An iteration is a defined time period that you run an experiment for. An iteration can be any length that you choose, and you can run multiple iterations of the same experiment.

To learn more, read Managing experiments.

K

Kill switch

A kill switch is a permanent flag used to shut off tools or functionality in the case of an emergency.

To learn more, read Flag templates.

L

Layer

A layer is a set of experiments that cannot share traffic with each other. All of the experiments within a layer are mutually exclusive, which means that if a context is included in one experiment, LaunchDarkly will exclude it from any other experiments in the same layer.

To learn more, read Mutually exclusive experiments.

M

Mean

In frequentist experiment results, the mean is the average numeric value you should expect for a flag variation, based on the data collected so far. Only numeric metrics measure the mean.

To learn more, read Probability report.

Member

A member or account member is a person who uses LaunchDarkly at your organization. These people work at your organization or have access rights to your organization's LaunchDarkly environment for another reason, such as contractors or part-time employees.

To learn more, read Account members.

Metric

LaunchDarkly uses different kinds of metrics to do things like measure flag change impact, gauge application performance, track account usage, and more.

The five kinds of metrics within LaunchDarkly are:

  • Flag impact metrics: these metrics allow you to measure specific end-user behaviors as part of an experiment or guarded rollout. Metrics can measure things like links clicked, money spent, or response time. When combined with a flag in an experiment, metrics determine which flag variation is the winning variation. Metrics send metric events to LaunchDarkly. To learn more, read Metrics.
  • Engineering insights project metrics: these metrics track engineering team efficiency and performance. To learn more, read Project metrics.
  • Migration flag metrics: these metrics track the progress of a migration flag. To learn more, read Migration flag metrics.
  • Application adoption metrics: these metrics track the adoption percentage for an application version. To learn more, read Adoption metrics.
  • Account metrics: these metrics help you understand your client-side monthly active users (MAU) usage, Experimentation key usage, Data Export usage, and server usage for billing purposes. To learn more, read Account usage metrics.

Migration flag

A migration flag is a temporary flag used to migrate data or systems while keeping your application available and disruption-free. Migration flags break the switch from an old implementation to a new one into a series of recommended stages, where movement from one stage to the next happens in incremental steps.

To learn more, read Flag templates.

Monthly active users (MAU)

MAU is a billing metric that measures the number of user contexts your flags encounter from client-side SDKs over a particular month. MAU includes user contexts that are both single contexts and those that are part of a multi-context. These user contexts appear on the Contexts list, and expire from the list after 30 days of inactivity.

To learn more, read Account usage metrics.

Mutually exclusive experiment

A mutually exclusive experiment is an experiment configured to prevent its contexts from being included in other experiments. Experiments are mutually exclusive from each other when they are contained within the same layer.

To learn more, read Mutually exclusive experiments.

P

P-value

In frequentist experiment results, a treatment variation's probability value, or p-value, is the likelihood that the observed difference from the control variation is due to random chance. A p-value of less than or equal to 0.05 is statistically significant. The lower the p-value, the less likely the relative difference is due to chance alone.

Percentage rollout

A percentage rollout is a rollout option for a targeting rule that serves a given flag variation to a specified percentage of contexts that encounter the flag. A common use case for percentage rollouts is to manually increment the percentage of customers targeted by a flag over time until 100% of the customers receive one variation of a flag.

To learn more, read Percentage rollouts and Releasing features with LaunchDarkly.
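
Percentage rollouts rely on deterministic bucketing, so a given context consistently lands in the same bucket as the percentage grows. LaunchDarkly's real algorithm differs in its details; the following is an illustrative sketch of the general technique.

```python
# Illustrative sketch of deterministic percentage bucketing: hash a
# flag-specific salt with the context key, scale the hash to [0, 1),
# and compare it to the rollout percentage. Because the bucket value is
# stable, raising the percentage never reshuffles existing contexts.
import hashlib

def bucket(context_key: str, flag_salt: str) -> float:
    digest = hashlib.sha1(f"{flag_salt}.{context_key}".encode()).hexdigest()
    return int(digest[:15], 16) / float(0xFFFFFFFFFFFFFFF)

def in_rollout(context_key: str, flag_salt: str, percentage: float) -> bool:
    return bucket(context_key, flag_salt) < percentage / 100.0

# A context that is in a 10% rollout stays in it at 50%.
print(in_rollout("user-key-123abc", "my-flag", 10))
print(in_rollout("user-key-123abc", "my-flag", 50))
```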

Posterior distribution

In Bayesian statistics, the posterior distribution is the expected range of outcomes for a treatment variation based on prior beliefs about your data gathered from the control variation.

To learn more, read Bayesian versus frequentist statistics.

Posterior mean

In Bayesian experiment results, the posterior mean is a flag variation's average numeric value that you should expect in an experiment, based on the data collected so far. Only numeric metrics measure the posterior mean.

To learn more, read Bayesian experiment results.

Prerequisite

You can make flags depend on other flags being enabled to take effect. A prerequisite flag is one on which a second flag depends. When the second flag is evaluated, the prerequisite flag must be on, and the target must be receiving the variation of the prerequisite flag that you specify. If the prerequisite flag is toggled off, the target will receive the default off variation of the dependent flag.

To learn more, read Flag prerequisites.

Primary context kind

The primary context kind is the context kind with the highest volume of monthly activity. For most customers, the primary context kind is user.

For billing purposes, LaunchDarkly only charges for contexts from the primary context kind, called MAU. LaunchDarkly calculates this as the context kind with the largest number of unique contexts that evaluate, initialize, or identify any flag from a client-side SDK over a given calendar month.

To learn more about context kinds, read Context kinds. To learn more about billing by contexts, read Calculating billing.

Probability to be best

In Bayesian experiment results, the variation with the highest probability to be best is the variation that had the largest positive impact on the metric you're measuring. Probability to be best displays only for metrics that use an "Average" analysis method.

To learn more, read Bayesian experiment results.

Probability to beat control

In experiment results, a treatment variation's probability to beat control is the likelihood that the variation is better than the control variation, and can be considered the winning variation.

To learn more, read Bayesian experiment results.

Progressive rollout

A progressive rollout is a type of flag rollout in which LaunchDarkly gradually increases the percentage of contexts that are receiving a particular flag variation. You can specify the duration and how the percentage increases. A common use case for progressive rollouts is to automatically increment the percentage of customers targeted by a flag over time until 100% of the customers receive one variation of a flag.

To learn more, read Progressive rollouts and Releasing features with LaunchDarkly.

Project

A project is an organizational unit for flags in your LaunchDarkly account. You can define projects in any way you like. A common pattern is to create one project in your LaunchDarkly account for each product your company makes. Each project can have multiple environments.

To learn more, read Projects.

R

Randomization unit

An experiment's randomization unit is the context kind that the experiment uses to randomly assign contexts to variations, according to the experiment's traffic allocation.

To learn more, read Randomization units.

Regression

A regression occurs when LaunchDarkly detects a negative effect on your application's performance as a result of a flag change or rollout. You can configure guarded rollouts to notify you, or to automatically roll back a release, when LaunchDarkly detects a regression.

To learn more, read Guarded rollouts.

Relative difference from control

In experiment results, the relative difference from control column displays the difference between a variation's conversion rate and the control's conversion rate.

To learn more, read Bayesian experiment results and Frequentist experiment results.
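
Assuming the standard definition of relative difference, the arithmetic is:

$$\text{relative difference} = \frac{r_{\text{variation}} - r_{\text{control}}}{r_{\text{control}}}$$

For example, if the control converts at 10% and the variation converts at 12%, the relative difference is (0.12 − 0.10) / 0.10 = 20%.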

Release flag

A release flag is a temporary flag that initially serves "Unavailable" (false) to most or all of its targets, then gradually rolls out the "Available" (true) variation until it reaches 100%.

To learn more, read Flag templates.

Release pipeline

A release pipeline lets you move flags through a series of phases, rolling out flags to selected environments and audiences in automated steps. You can use release pipelines to view the status of ongoing releases across all flags within a project, enforce a standardized release process, and ensure that releases follow best practices.

To learn more, read Release pipelines.

Role, custom role

A role is a description of the access that a member or team has within LaunchDarkly. Every LaunchDarkly account has four built-in roles: Reader, Writer, Admin, and Owner.

A custom role is also a description of the access that a member or team has within LaunchDarkly. With custom roles, you create the description of the access using a set of statements called a policy.

Every member must have at least one role or custom role assigned to them, either directly or through a team. This is true even if the role explicitly prohibits them from accessing any information within LaunchDarkly.

To learn more, read Built-in roles, Custom roles, and Custom role concepts.

Rule

A rule or targeting rule is a description of which contexts should be included for a given outcome. In flags, targeting rules determine which flag variations your application should serve to which contexts. In segments, targeting rules determine which contexts are part of the segment. In AI configs, targeting rules determine which variations your application should serve to which contexts.

Targeting rules can have one or more conditions. Each condition has three parts:

  • A context kind and attribute, which defines the scope of the condition's impact, such as only targeting an email address for the selected context kind.
  • An operator, which sets differentiating characteristics of the attribute, such as limiting the condition to emails that end with certain extensions.
  • A value, which identifies the attribute by a value you specify, such as .edu.

To learn more, read Target with flags, Segment targeting for rule-based and smaller list-based segments, Segment targeting for larger list-based segments, and Target with AI configs.
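
As an illustration of those three condition parts, here is a sketch shaped like the clause objects in LaunchDarkly's REST API, written as a Python dict; treat the exact field names as illustrative rather than authoritative.

```python
# An illustrative sketch of one targeting rule condition, modeled on the
# clause shape in LaunchDarkly's REST API (field names are illustrative).
rule = {
    "clauses": [
        {
            "contextKind": "user",  # context kind and attribute: the scope
            "attribute": "email",
            "op": "endsWith",       # operator: the differentiating test
            "values": [".edu"],     # value(s): what the attribute must match
        }
    ],
    "variation": 0,  # the variation to serve when the condition matches
}
```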

S

SDK

The LaunchDarkly SDK is the software development kit that you use to integrate LaunchDarkly with your application's code.

We provide more than two dozen LaunchDarkly SDKs, in different languages and frameworks. Our client-side SDKs are designed for single-user desktop, mobile, and embedded applications. They are intended to be used in a potentially less secure environment, such as a personal computer or mobile device. Our server-side SDKs are designed for multi-user systems. They are intended to be used in a trusted environment, such as inside a corporate network or on a web server.

When your application starts, your code should initialize the LaunchDarkly SDK you're working with. When a customer encounters a feature flag in your application, your code should use the SDK to evaluate the feature flag and retrieve the appropriate flag variation for that customer.

To learn more, read Setting up an SDK and Client-side, server-side, and edge SDKs. For more information about the differences between the LaunchDarkly SDK and the LaunchDarkly REST API, read Comparing LaunchDarkly's SDKs and REST API.

Segment

A segment is a list of contexts that you can use to manage flag targeting behavior in bulk. Segments are useful for keeping groups of contexts, like beta-users or enterprise-customers, up to date. They are environment-specific.

LaunchDarkly supports:

  • rule-based segments, which let you target groups of contexts individually or by attributes,
  • list-based segments, which let you target individual contexts or uploaded lists of contexts, and
  • synced segments, which let you target groups of contexts backed by an external data store.

To learn more, read Segments.

Segment is also the name of a third-party software application that collects and integrates customer data across tools. LaunchDarkly integrates with Segment in the following ways:

  • You can use Segment as a destination for LaunchDarkly's Data Export feature. To learn more, read Segment.
  • You can use Segment as a source for metric events. To learn more, read Segment for metrics.
  • Segment Audiences is one of several tools you can use to create synced segments. To learn more, read Segments synced from external tools.

Standard metric group

A standard metric group is a reusable set of metrics you can use with feature change experiments to standardize metrics across multiple experiments.

To learn more, read Metric groups.

T

Target

To target (verb) is to specify which contexts that encounter feature flags or AI configs in your application should receive a particular variation of that resource. A target (noun) is an individual context or a set of contexts described by a targeting rule.

To learn more, read Target with flags, Segment targeting for rule-based and smaller list-based segments, Segment targeting for larger list-based segments, and Target with AI configs.

Team

A team is a group of members in your LaunchDarkly account. To learn more, read Teams.

Total value

In experiment results, the total value column displays the sum total of all the numbers returned by a numeric metric.

To learn more, read Bayesian experiment results and Frequentist experiment results.

Traffic allocation

An experiment's traffic allocation is the number of contexts you assign to each flag variation you're experimenting on.

To learn more, read Allocating experiment audiences.

Treatment variation

In an experiment, the treatment variation is the flag variation that you are comparing against the control variation, to determine if the treatment has a negative or positive effect on the metric you're measuring.

To learn more, read Experimentation.

U

Unit aggregation method

The unit aggregation method for a metric is the mathematical method you want to aggregate event values by for the metric's results. You can aggregate either by sum or by average.

To learn more, read Unit aggregation method.

User

Previously, a user was the only way to refer to an entity that encountered feature flags in your product.

Newer versions of the LaunchDarkly SDKs replace users with contexts. Contexts are a more powerful and flexible way of referring to the people, services, machines, or other resources that encounter feature flags in your product. A user is just one kind of context.

People who are logged in to the LaunchDarkly user interface are called members.

V

Variation

A variation is a description of a possible value that a flag can have. Each variation must contain the possible flag value. It may also contain a name and description.

Flags share variations across environments within a project. However, the flags can have different states, targets, and rules in each environment.

When you create a flag, you must decide whether it is a boolean flag or a multivariate flag. Boolean flags have exactly two variations, with values of "true" and "false." Multivariate flags can have more than two variations. Each of the variations must have a value of the same type, for example, a string.

To learn more, read Creating flag variations.

In migration flags, variations are built-in and cannot be edited because they are linked to the migration's stages.

To learn more, read Targeting with migration flags.
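
As a sketch, evaluating a hypothetical multivariate string flag with the Python server-side SDK (reusing the client and context from the evaluation example above) might look like this; variation_detail also reports which variation was served and why.

```python
# Evaluating a hypothetical multivariate string flag; every variation
# of a multivariate flag has a value of the same type.
detail = client.variation_detail("checkout-button-color", context, "blue")
print(detail.value)            # e.g. "green", one of the flag's variations
print(detail.variation_index)  # index of the variation that was served
print(detail.reason)           # why it was served (rule match, fallthrough, ...)
```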

W

Winning variation

An experiment's winning variation is the variation that performed the best out of all the variations tested.

For Bayesian experiments, every experiment iteration displays each variation's probability to beat control. The variation with the highest probability to beat control, taken together with its probability to be best, is the winning variation.

For frequentist experiments, every experiment iteration displays each variation's p-value. The variation with the highlighted p-value is the winning variation.

To learn more, read Analyzing experiments.