    Analyzing experiments

    Read time: 2 minutes
    Last edited: May 12, 2023


    This topic explains how to interpret an experiment's results and apply its findings to your product.

    Understanding experiments as they run

    When your experiments are running, you can view information about them on the Experiments list or on the related flag's Experimentation tab. The Experimentation tab displays all the experiments a flag is participating in, including both experiments that are currently recording and experiments that are stopped.

    Here are some things you can do with each experiment:

    • Stop the experiment or start a new iteration. To learn more, read Managing experiments.
    • Edit the metrics connected to the experiment and start a new iteration.
    • View experiment data over set periods of time on the Iterations tab:
    An experiment's "Iterations" tab.

    Reading experiment data

    The data an experiment collects is displayed on its results tab.

    The results tab includes information about attribute filters, experiment traffic, sample ratios, and a graph representing the results:

    An experiment's results tab.

    To learn more about interpreting an experiment's results, read Reading experiment results.

    Determining how long to run an experiment

    You may not always know how long to run an experiment. To help decide, consider:

    • The current probability that the winning variation is the best, and how long it would take to improve your confidence
    • The level of risk involved in rolling out the winning variation to all contexts

    Experiments with two variations display a sample size estimator, which estimates how much more traffic must encounter your experiment before it reaches your chosen probability to be best. Experiments with more than two variations do not display a sample size estimator.

    In this example, for a 90% probability of being best, 164 more user contexts should be in the experiment before you stop the iteration and roll out the winning variation to all contexts:

    An experiment's sample size estimator results.

    To be confident that the winning variation is the best of the variations tested, wait until the sample size estimator indicates you have reached the needed number of contexts. Alternatively, you can end the experiment before you reach that number if the risk of rolling out the winning variation early is low, or if you don't anticipate a significant impact on your user base.
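    The "probability to be best" described above can be illustrated with a simple Bayesian Monte Carlo estimate for a two-variation experiment with a binary (conversion) metric. This is an illustrative sketch only, not the platform's exact statistical method; the function name, the uniform prior, and the example counts are assumptions:

    ```python
    import random

    def probability_to_be_best(conv_a, total_a, conv_b, total_b, draws=20000):
        """Estimate each variation's probability of being best by sampling
        conversion rates from Beta posteriors (uniform prior assumed)."""
        wins_b = 0
        for _ in range(draws):
            # Beta(successes + 1, failures + 1) posterior for each variation
            p_a = random.betavariate(conv_a + 1, total_a - conv_a + 1)
            p_b = random.betavariate(conv_b + 1, total_b - conv_b + 1)
            if p_b > p_a:
                wins_b += 1
        p_best_b = wins_b / draws
        return {"A": 1 - p_best_b, "B": p_best_b}

    # Hypothetical example: variation B converts 60/500 contexts vs. A's 45/500
    result = probability_to_be_best(45, 500, 60, 500)
    ```

    As more contexts encounter the experiment, the posteriors narrow and the estimate stabilizes, which is why waiting for the estimated sample size raises your confidence in the winner.
    
    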

    Choosing a winning variation

    The winning variation for a completed experiment is the variation that is most likely to be the best option out of all of the variations tested. To learn more, read Winning variations.

    Consider stopping an experiment after you choose a winning variation

    If you're done with an experiment and have rolled out the winning variation to your user base, it is a good time to stop your experiment. Experiments running on a user base that only receives one flag variation do not return useful results. Stopping an experiment retains all the data collected so far.

    Further analyzing results

    If you're using Data Export, you can find experiment data in your Data Export destinations and analyze it further with your own third-party tools.

    To learn more, read Data Export.