
AI configs

The AI configs product is available for early access

The AI configs product is only available in early access for customers on Foundation and Enterprise plans. To request early access, navigate to AI configs and join the waitlist.

Overview

This category explains how to use LaunchDarkly to manage your AI configs. An AI config is a resource that you create in LaunchDarkly. You can use AI configs to customize, test, and roll out new large language models (LLMs) and prompts within your generative AI applications.

With AI configs, you can:

  • manage your prompts outside of your application code, so you can update them at runtime. Teammates who have access to LaunchDarkly but aren't familiar with your codebase can also collaborate and iterate on prompt messages.
  • upgrade your app to the newest model version as soon as it's released, then roll out the changes to your customers gradually and safely.
  • add a config for a new model provider, and progressively move your production traffic to the new provider.
  • compare versions and determine which one performs better, based on satisfaction, cost, or other metrics. You can compare different prompts on models from the same provider, or the same prompts on different model providers.

About AI configs

Within each AI config, you define one or more versions, each of which includes a model configuration and a prompt with one or more messages. You can also define targeting rules, just like you do with feature flags, to make sure that particular prompts and model configurations are served to particular end users of your application.
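
As an illustration, a single version pairs a model configuration with a short prompt, as in the sketch below. The `AIConfig`, `ModelConfig`, and `LDMessage` types follow LaunchDarkly's Python AI SDK, but treat the exact names, fields, and template syntax as assumptions and check the SDK reference for your language.

```python
# Illustrative sketch: type names, fields, and the {{ }} template syntax are assumptions
# based on the Python AI SDK; the exact shapes differ by SDK, so check the reference.
from ldai.client import AIConfig, ModelConfig, LDMessage

# One AI config version: a model configuration plus a prompt made of one or more messages.
version = AIConfig(
    enabled=True,
    model=ModelConfig(
        name="gpt-4o",                                        # model to call (example)
        parameters={"temperature": 0.7, "max_tokens": 512},
    ),
    messages=[
        LDMessage(role="system", content="You are a concise support assistant."),
        LDMessage(role="user", content="Help {{ ldctx.name }} with: {{ question }}"),
    ],
)
```

In the LaunchDarkly UI you define versions like this directly on the AI config; in application code, an object of the same shape is typically used as the fallback value you pass to the SDK.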

Then, within your application, you use one of LaunchDarkly's AI SDKs. The SDK determines which prompt and model your application should serve to which contexts. The SDK can also customize the prompt messages based on context attributes and other variables that you provide. This means both the messages and the model evaluation are specific to each end user at runtime, and you can update your prompts without redeploying your application.
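
Here is a minimal sketch of that evaluation step, assuming the Python AI SDK. The `LDAIClient` wrapper, the `config(...)` call, and the key and variable names shown are assumptions for illustration, so check the AI SDK reference for your language before relying on them.

```python
# Illustrative sketch: initialization, class names, and the config() signature are
# assumptions based on the Python AI SDK; verify against the SDK reference.
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai.client import LDAIClient, AIConfig, ModelConfig, LDMessage

ldclient.set_config(Config("your-sdk-key"))
ai_client = LDAIClient(ldclient.get())

# The context describes the end user; your targeting rules decide which version they get.
context = Context.builder("user-key-123").kind("user").name("Sandy").build()

# Fallback served if the AI config can't be fetched.
default_value = AIConfig(
    enabled=True,
    model=ModelConfig(name="gpt-4o"),
    messages=[LDMessage(role="system", content="You are a concise support assistant.")],
)

# Evaluate the AI config for this context. The returned config contains the model to use
# and the messages, with template variables already filled in from context attributes and
# the variables dict. The tracker is used later to record metrics for this generation.
config, tracker = ai_client.config(
    "customer-support-assistant",                  # AI config key (hypothetical)
    context,
    default_value,
    {"question": "How do I reset my password?"},   # template variables (hypothetical)
)

if config.enabled:
    messages = [{"role": m.role, "content": m.content} for m in config.messages]
    # Pass `messages` and `config.model` to your model provider's client.
```

Because evaluation happens at runtime, changing a prompt or a targeting rule in LaunchDarkly changes what this call returns, without a redeploy.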

After you use this customized config in your AI model generation, you can use the SDK to record various metrics, including generation count, tokens, and satisfaction rate. These appear in the LaunchDarkly user interface for each AI config version.
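
Below is a sketch of recording those metrics, assuming the tracker object that the Python AI SDK returns alongside the evaluated config. The `LDAIConfigTracker` and `TokenUsage` names and the `track_*` methods are assumptions, so verify them against the SDK reference.

```python
# Illustrative sketch: the tracker is returned alongside the config by the AI SDK
# (see the previous sketch). Type and method names are assumptions; verify against
# the SDK reference for your language.
from ldai.tracker import LDAIConfigTracker, TokenUsage

def record_generation_metrics(tracker: LDAIConfigTracker, duration_ms: int, usage: TokenUsage) -> None:
    """Record metrics for one generation so they roll up to the served AI config version."""
    tracker.track_duration(duration_ms)   # latency of the model call, in milliseconds
    tracker.track_tokens(usage)           # input, output, and total token counts
    tracker.track_success()               # count this generation as successful
```

The SDK may also offer provider-specific helpers that wrap the model call and record duration and token usage for you; satisfaction-style feedback is recorded the same way, from whatever rating signal your application collects.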

The topics in this category explain how to create AI configs and versions, update targeting rules, monitor related metrics, and incorporate AI configs in your application.

  • Quickstart for AI configs
  • Create AI configs and versions
  • Target with AI configs
  • Monitor AI configs
  • AI configs and information privacy