
AI prompt flags

Read time: 3 minutes
Last edited: Jun 12, 2024
AI prompt flags are available through opt-in

AI prompt flags are available only if you opt in through a feature preview. To enable AI prompt flags, click your member icon in the left sidenav and choose Feature preview from the menu. In the "Feature preview" dialog, toggle "AI flag templates" on.

Overview

This topic explains how to use AI prompt flags to roll out new large language model (LLM) prompts within your AI application. AI prompt flags deliver prompts that guide the responses from the AI models in your application, and let you introduce new prompts to support specific model interaction scenarios.

Understanding AI prompt flags

AI prompt flags, sometimes called LLM prompt flags, are JSON flags that let you configure the different personas within a prompt, for example "System," "User," or "Assistant." Each flag variation includes a different configuration of these personas. Typically, a single AI prompt flag has multiple variations of a similar prompt, for example one for each model version or audience the prompt applies to.
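
For example, a single variation's JSON value might hold one prompt per persona. The field names below are illustrative rather than a required schema, and the value is shown as the Python dictionary a LaunchDarkly SDK would return when it evaluates the flag:

```python
# Illustrative shape of one AI prompt flag variation, as returned when a
# LaunchDarkly SDK evaluates the JSON flag. The field names are examples.
prompt_variation = {
    "system": "You are a concise support assistant. Answer in two sentences or fewer.",
    "user": "Summarize the customer's question, then answer it.",
    "assistant": "Understood. I will summarize the question before answering.",
}
```

Another variation of the same flag could hold a longer or differently worded set of prompts, for example one tuned for a newer model version or a different audience.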

To create an AI prompt flag:

  1. Navigate to the flags list.
  2. Click Create flag. The "Create new flag" page appears.
  3. Enter a unique, human-readable Name.
  4. (Optional) Update the flag Key. You'll use this key to reference the flag in your code. A suggested key auto-populates from the name you enter, but you can customize it if you wish.
  5. (Optional) Enter a Description of the flag. A brief, human-readable description helps your account members understand what the flag is for.
  6. (Optional) Update the Maintainer for the flag.
  7. (Optional) Check the Include flag in this project's release pipeline box. To learn more, read Release pipelines.
  8. Choose the AI prompt template in the Configuration section:
The "Configuration" section of the "Create new flag" page, with "AI prompt" called out.
The "Configuration" section of the "Create new flag" page, with "AI prompt" called out.
  9. Choose Yes or No to indicate whether this flag is temporary. AI prompt flags are usually permanent.
  10. Select the JSON flag type.
  11. In the "Variations" section, review each variation and update it as needed. Each variation provides a specific set of prompts to the AI model used in your application. Update the "system," "user," and "assistant" prompts as needed. The "system" and "user" prompts are configured in the sample code.
  12. (Optional) Update the default variations.
  13. Choose one or more tags from the Tags menu.
  14. Check the SDKs using Mobile Key and/or SDKs using client-side ID boxes to indicate which client-side SDKs you will use to evaluate this flag. If you are using a server-side SDK, leave these boxes unchecked.
  15. Click Create flag.

To learn more, read Creating new flags.

Common use cases

In the most common use case, your application uses a LaunchDarkly SDK to evaluate the AI prompt flag and parses the flag variation returned. Your application then includes the returned prompt as part of the payload it delivers to a language model provider, such as OpenAI, Anthropic, or Amazon Bedrock, alongside the end user's request. The language model receives the prompt and the end user's content, and uses them to generate a response for the end user.
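
The sketch below shows that flow with the LaunchDarkly server-side Python SDK. The flag key, SDK key, context key, default prompts, and the call_model helper that stands in for your language model provider's client library are all placeholder assumptions:

```python
import ldclient
from ldclient import Context
from ldclient.config import Config


def call_model(messages):
    # Placeholder for your language model provider's API call
    # (OpenAI, Anthropic, Amazon Bedrock, and so on).
    raise NotImplementedError


# Initialize the LaunchDarkly client with your server-side SDK key.
ldclient.set_config(Config("sdk-key-123abc"))
client = ldclient.get()

# Build a context for the end user making the request.
context = Context.builder("user-key-123abc").kind("user").name("Sandy").build()

# Evaluate the AI prompt flag. For a JSON flag, the variation is returned as a
# dictionary; the default value below is used if evaluation fails.
default_prompts = {"system": "You are a helpful assistant.", "user": ""}
prompts = client.variation("ai-prompt-flag", context, default_prompts)

# Combine the flag's prompts with the end user's request and send the payload
# to your language model provider alongside the end user's content.
end_user_request = "How do I reset my password?"
messages = [
    {"role": "system", "content": prompts["system"]},
    {"role": "user", "content": prompts["user"] + "\n\n" + end_user_request},
]
response = call_model(messages)
```

Because the prompts come from the flag, serving a different variation to a given audience changes the prompts your application sends without a code change or redeploy.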

Applications working with AI prompts typically include a default prompt for all audiences, plus targeting rules and additional variations that roll out specific prompt configurations to specific audiences.

We expect that you will create or update variations for your AI prompt flag frequently, as new AI models are released and you adjust your prompts to get the best results. We recommend creating new variations and testing them to ensure your end users receive predictable, cost-effective, and useful results.