
Tracking AI metrics

Read time: 5 minutes
Last edited: Nov 08, 2024
The AI configs product is available for early access

The AI configs product is only available in early access for customers on Foundation and Enterprise plans. To request early access, navigate to AI configs and join the waitlist.


The AI SDKs are designed for use with the AI configs product. The AI SDKs are currently in an alpha version.

Overview

This topic explains how to record metrics from your AI model generation, including duration, satisfaction, and tokens. This feature is available for AI SDKs only.

About AI metrics

To help you track how your AI model generation is performing, the AI SDKs provide options to record metrics from your model generation. LaunchDarkly displays these metrics on the AI config Monitoring tab in the user interface.

The SDK provides separate functions to record metrics for several of the models that you can select when you set up your AI config version in the LaunchDarkly UI. If your model is not directly supported in the SDK, you can use the SDK's other track* functions to record these metrics manually.
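The pattern behind all of the track* functions is the same: run your model generation, measure what happened, and hand the result to the tracker. Here is a minimal sketch of that pattern in Python. The `StubTracker` class is a hypothetical stand-in for illustration only; in the real AI SDKs, the tracker comes from your AI config, and `track_duration` is the SDK method shown later in this topic.

```python
import time

# Hypothetical stand-in for the SDK's tracker object, for illustration only.
# In the real AI SDKs, the tracker is provided on your AI config.
class StubTracker:
    def __init__(self):
        self.metrics = {}

    def track_duration(self, duration_ms):
        # Record how long the model generation took, in milliseconds.
        self.metrics["duration_ms"] = duration_ms

def generate_with_metrics(tracker, generate):
    """Run a model generation callable and record its duration."""
    start = time.monotonic()
    result = generate()
    tracker.track_duration(int((time.monotonic() - start) * 1000))
    return result

tracker = StubTracker()
output = generate_with_metrics(tracker, lambda: "model output")
print(output)                            # model output
print("duration_ms" in tracker.metrics)  # True
```

The convenience track[Model] functions wrap this pattern for you, extracting duration and token counts from the model's response object.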

AI SDKs

This feature is available for all of the AI SDKs:

  • Node.js (AI)
  • Python (AI)

Node.js (AI)


Use one of the track[Model] functions to record metrics from your AI model generation. The SDK provides separate track[Model] functions for several of the models that you can select when you set up your AI config version in the LaunchDarkly user interface.

Here's how:

const { tracker } = aiConfig;
const completion = await tracker.trackOpenAI(
  // Pass in the result of the OpenAI operation.
  // When you call the OpenAI operation, use details from aiConfig.
  // For instance, you can pass aiConfig.config.prompt
  // and aiConfig.config.model to your specific OpenAI operation.
  //
  // For a complete example, visit https://github.com/launchdarkly/js-core/tree/main/packages/sdk/server-ai/examples/openai.
);

You can also use the SDK's other track* functions to record these metrics manually. You may need to do this if you are using a model for which the SDK does not provide a convenience track[Model] function. The track[Model] functions expect a complete response, so you may also need to record metrics manually if your application streams its responses.

Each of the track* functions sends data back to LaunchDarkly. The Monitoring tab of the AI config in the LaunchDarkly UI aggregates data from the track* functions from across all versions of the AI config.

Here's how to record metrics manually:

// Track your own start and stop time.
// Set duration to the time (in ms) that your AI model generation takes.
// The duration may include network latency, depending on how you calculate it.
aiConfig.tracker.trackDuration(duration);

To learn more, read LDAIConfigTracker.

Python (AI)


Use one of the track_[model] functions to record metrics from your AI model generation. The SDK provides separate track_[model] functions for several of the models that you can select when you set up your AI config version in the LaunchDarkly user interface.

Here's how:

tracker = config_value.tracker
completion = tracker.track_openai(
    # Pass in the result of the OpenAI operation.
    # When calling the OpenAI operation, use details from config_value.
    # For instance, you can pass config_value.config['model']['modelId']
    # and config_value.config['prompt'] to your specific OpenAI operation.
)

You can also use the SDK's other track* functions to record these metrics manually. You may need to do this if you are using a model for which the SDK does not provide a convenience track_[model] function. The track_[model] functions expect a complete response, so you may also need to record metrics manually if your application streams its responses.

Each of the track* functions sends data back to LaunchDarkly. The Monitoring tab of the AI config in the LaunchDarkly UI aggregates data from the track* functions from across all versions of the AI config.

Here's how to record metrics manually:

# Track your own start and stop time.
# Set duration to the time (in ms) that your AI model generation takes.
# The duration may include network latency, depending on how you calculate it.
config_value.tracker.track_duration(duration)
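When you stream a response, there is no single completion object to hand to track_openai, so you accumulate the streamed chunks yourself and then record metrics manually, as described above. Here is a sketch of that approach. The `StubTracker` class and the plain-string chunks are illustrative assumptions for this example, not the SDK's actual tracker or a real streaming response format.

```python
import time

# Hypothetical stand-in tracker, for illustration only; the real SDK's
# tracker is available on your AI config as config_value.tracker.
class StubTracker:
    def __init__(self):
        self.metrics = {}

    def track_duration(self, duration_ms):
        self.metrics["duration_ms"] = duration_ms

def consume_stream(tracker, chunks):
    """Accumulate streamed chunks, then record the duration manually."""
    start = time.monotonic()
    pieces = []
    for chunk in chunks:  # chunks stands in for a streaming model response
        pieces.append(chunk)
    tracker.track_duration(int((time.monotonic() - start) * 1000))
    return "".join(pieces)

tracker = StubTracker()
text = consume_stream(tracker, iter(["Hel", "lo"]))
print(text)  # Hello
```

With a real streaming client, you would extract the text (and any token counts the final chunk reports) from each chunk object rather than joining raw strings.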

To learn more, read LDAIConfigTracker.