
Node.js (AI) SDK reference

Read time: 6 minutes
Last edited: Nov 25, 2024
The AI configs product is available for early access

The AI configs product is only available in early access for customers on Foundation and Enterprise plans. To request early access, navigate to AI configs and join the waitlist.


The AI SDKs are designed for use with the AI configs product. The Node.js (AI) SDK is currently in an alpha version.

Overview

This topic documents how to get started with the Node.js (AI) SDK, and links to reference information on all of the supported features. You can use either JavaScript or TypeScript when working with the Node.js (AI) SDK.

SDK quick links

LaunchDarkly's SDKs are open source. In addition to this reference guide, we provide source, API reference documentation, and sample applications:

  • SDK API documentation: SDK API docs
  • GitHub repository: node-server-sdk-ai
  • Sample application: Using Bedrock, Using OpenAI
  • Published module: npm
For use in server-side applications only

This SDK is intended for use in multi-user Node.js server applications. To learn more about LaunchDarkly's different SDK types, read Client-side, server-side, and edge SDKs.

Get started

LaunchDarkly AI SDKs interact with AI configs. AI configs are the LaunchDarkly resources that manage your AI prompts and model configurations.

You can use the Node.js (AI) SDK to customize your AI config based on the context that you provide. This means both the prompt and the model evaluation in your generative AI application are specific to each end user, at runtime. You can also use the AI SDKs to record metrics from your AI model generation, including duration and tokens.

Follow these instructions to start using the Node.js (AI) SDK in your application.

Install the SDK

First, install the AI SDK as a dependency in your application using your application's dependency manager. If you want to depend on a specific version, refer to the SDK releases page to identify the latest version. The Node.js (AI) SDK is built on the Node.js (server-side) SDK, so you'll need to install that as well.

Here's how:

npm install @launchdarkly/node-server-sdk
npm install @launchdarkly/server-sdk-ai

Next, import init, LDContext, and initAi in your application code. If you are using TypeScript, you can optionally import the LDClient, LDAIClient, and LDAIConfig types as well. TypeScript can infer these, so they are not strictly required.

Here's how:

import { init, LDClient, LDContext } from '@launchdarkly/node-server-sdk';
import { initAi, LDAIClient, LDAIConfig } from '@launchdarkly/server-sdk-ai';

Initialize the client

After you install and import the SDK, create a single, shared instance of LDClient. When the LDClient is initialized, use it to initialize the LDAIClient. The LDAIClient is how you interact with AI configs. Specify the SDK key to authorize your application to connect to a particular environment within LaunchDarkly.

The Node.js SDKs use an SDK key

The Node.js AI and server-side SDKs use an SDK key. Keys are specific to each project and environment. They are available from the Environments list for each project. To learn more about key types, read Keys.

Here's how:

const ldClient: LDClient = init('sdk-key-123abc');

try {
  await ldClient.waitForInitialization({ timeout: 10 });
  // initialization complete
} catch (error) {
  // timeout or SDK failed to initialize
}

const aiClient: LDAIClient = initAi(ldClient);
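
Rather than hard-coding the SDK key as in the snippet above, you might read it from the environment. A minimal sketch with a hypothetical helper; the variable name LAUNCHDARKLY_SDK_KEY is an assumption, not an SDK requirement:

```typescript
// Hypothetical helper (not part of the SDK): look up the SDK key in an
// environment map instead of hard-coding it in source. The variable name
// LAUNCHDARKLY_SDK_KEY is an assumed convention.
function getSdkKey(env: Record<string, string | undefined>): string {
  const key = env.LAUNCHDARKLY_SDK_KEY;
  if (!key) {
    throw new Error('LAUNCHDARKLY_SDK_KEY is not set');
  }
  return key;
}
```

You would then call init(getSdkKey(process.env)) instead of passing a literal key, which keeps the key out of version control.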

Configure the context

Next, configure the context that will use the AI config, that is, the context that will encounter generated AI content in your application. The context attributes determine which version of the AI config LaunchDarkly serves to the end user, based on the targeting rules in your AI config. If you are using template variables in the prompts in your AI config's versions, the context attributes also fill in values for the template variables.

Here's how:

const context: LDContext = {
  kind: 'user',
  key: 'user-key-123abc',
  firstName: 'Sandy',
  lastName: 'Smith',
  email: 'sandy@example.com',
  groups: ['Google', 'Microsoft'],
};
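
If the prompts in your AI config use template variables, customization substitutes context attributes and custom variables into them. The following is a simplified illustration of that substitution, not the SDK's actual implementation, which uses full Mustache-style templating; the `ldctx` prefix for context attributes follows the convention described in the AI configs documentation:

```typescript
// Illustrative only: a simplified stand-in for the template substitution
// the SDK performs during customization. Handles only flat {{name}}
// placeholders; the real SDK supports full Mustache templating.
function fillTemplate(
  template: string,
  variables: Record<string, string>,
): string {
  return template.replace(
    /\{\{\s*([\w.]+)\s*\}\}/g,
    (placeholder, name) => variables[name] ?? placeholder,
  );
}

// Context attributes are referenced with an `ldctx` prefix;
// custom variables use their own names.
const prompt = fillTemplate(
  'Hello {{ldctx.firstName}}, here is your {{exampleCustomVariable}}.',
  { 'ldctx.firstName': 'Sandy', exampleCustomVariable: 'exampleCustomValue' },
);
```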

Customize an AI config

Then, use the config function to customize the AI config. This function returns the customized prompt and model. Customization means that any variables you include in the prompt when you define the AI config version have their values filled in from the context attributes and the variables you pass to config. You can then pass the customized prompt directly to your AI provider.

The customization process within the AI SDK is similar to evaluating flags in one of LaunchDarkly's client-side, server-side, or edge SDKs, in that the SDK completes the customization without a separate network call.

Here's how:

const fallbackConfig = {
  model: {
    id: 'my-default-model',
    parameters: { name: 'My default model' },
  },
  messages: [{ role: 'system', content: '' }],
  provider: { id: 'my-default-provider' },
  enabled: true,
};

const aiConfig: LDAIConfig = aiClient.config(
  'ai-config-key-123abc',
  context,
  fallbackConfig,
  { exampleCustomVariable: 'exampleCustomValue' },
);
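
Before passing the customized result to your model provider, you typically check whether the config is enabled. A hedged sketch, using a hypothetical helper and a structural type that mirrors the fallbackConfig shape above rather than the SDK's own LDAIConfig:

```typescript
// Hypothetical helper (not part of the SDK): turn a customized config into
// a chat request, or null when the AI config is disabled or incomplete.
interface AIConfigLike {
  enabled: boolean;
  model?: { id: string };
  messages?: { role: string; content: string }[];
}

function toChatRequest(
  aiConfig: AIConfigLike,
): { model: string; messages: { role: string; content: string }[] } | null {
  if (!aiConfig.enabled || !aiConfig.model || !aiConfig.messages) {
    return null; // fall back to non-AI behavior
  }
  return { model: aiConfig.model.id, messages: aiConfig.messages };
}
```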

To learn more, read Customizing AI configs.

Record metrics from AI model generation

Finally, use one of the track[Model]Metrics functions to record metrics from your AI model generation.

Here's how:

const { tracker } = aiConfig;

const completion = await tracker.trackOpenAIMetrics(
  // Pass in the result of the OpenAI operation.
  // When you call the OpenAI operation, use details from aiConfig.
  // For instance, you can pass aiConfig.messages
  // and aiConfig.model to your specific OpenAI operation.
  //
  // For a complete example, visit https://github.com/launchdarkly/js-core/tree/main/packages/sdk/server-ai/examples/openai.
);

Alternatively, you can use the SDK's other track* functions to record these metrics manually. You may need to do this if you are using a model for which the SDK does not provide a convenience track[Model]Metrics function. The track[Model]Metrics functions also expect a complete response, so you may need to record metrics manually if your application uses streaming.
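
When recording metrics manually, a common pattern is to time the model call yourself and report the elapsed milliseconds. A sketch with a hypothetical withDuration helper; the callback stands in for whichever duration-tracking function the SDK's tracker exposes:

```typescript
// Hypothetical helper (not part of the SDK): run an arbitrary async model
// call, measure how long it takes, and report the elapsed milliseconds
// to a tracking callback supplied by the caller.
async function withDuration<T>(
  operation: () => Promise<T>,
  trackDuration: (ms: number) => void,
): Promise<T> {
  const start = Date.now();
  try {
    return await operation();
  } finally {
    // Report the duration whether the call succeeded or threw.
    trackDuration(Date.now() - start);
  }
}
```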

To learn more, read Tracking AI metrics.

Supported features

This SDK supports the following features:

  • Anonymous contexts
  • Context configuration
  • Customizing AI configs
  • Private attributes
  • Tracking AI metrics