The LaunchDarkly Relay Proxy

Overview

This topic explains what the LaunchDarkly Relay Proxy is and how to use it.

The LaunchDarkly Relay Proxy is a microservice that connects to the LaunchDarkly streaming API and proxies that connection to multiple clients.

The Relay Proxy lets a number of servers connect to a local stream instead of making a large number of outbound connections to LaunchDarkly's streaming service (stream.launchdarkly.com). You can configure the Relay Proxy to carry multiple environment streams from multiple projects.

The Relay Proxy is an open-source project supported by LaunchDarkly. The full source is in a GitHub repository. There's also a Docker image on Docker Hub.

Should I use the Relay Proxy?

We recommend using the Relay Proxy if you anticipate or already have many outbound connections from your app to LaunchDarkly. This can occur when your app makes heavy use of LaunchDarkly SDKs or runs many LaunchDarkly clients, such as when it consists of many nodes or microservices.

We developed the Relay Proxy to address specific scenarios. If your requirements meet any of the following criteria, use the Relay Proxy to improve performance and reliability.

You use PHP

PHP's shared-nothing architecture prevents LaunchDarkly from reusing the streaming API connection across requests.

You can use PHP without the Relay Proxy, but we strongly recommend using the proxy in daemon mode if you are using PHP in a high-throughput setting. In daemon mode, the Relay Proxy receives feature flag updates on behalf of your PHP processes and writes them to a persistent store that the PHP SDK reads from.

To learn more, read Using the proxy in different modes.

You want to reduce outbound connections to LaunchDarkly

A large number of servers, such as thousands or tens of thousands, can create more outbound persistent connections to LaunchDarkly's streaming API than a proxy or firewall can realistically handle.

Use the Relay Proxy in proxy mode so your servers connect to hosts in your own datacenter instead of connecting directly to LaunchDarkly's streaming API.

On an appropriately configured host, each Relay Proxy can handle tens of thousands of concurrent connections. This dramatically reduces the number of outbound connections to the LaunchDarkly streaming API.
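For example, in proxy mode a server-side SDK only needs its service endpoints pointed at your Relay Proxy host instead of LaunchDarkly. The sketch below uses the Go SDK (assuming v6); the relay hostname, port, and SDK key are placeholders, and other SDKs expose an equivalent base URI setting.

```go
package main

import (
	"time"

	ld "github.com/launchdarkly/go-server-sdk/v6"
	"github.com/launchdarkly/go-server-sdk/v6/ldcomponents"
)

func main() {
	var config ld.Config

	// Point all SDK traffic (streaming, polling, events) at the Relay Proxy
	// instead of stream.launchdarkly.com. The host and port are placeholders
	// for a Relay Proxy instance running in your own datacenter.
	config.ServiceEndpoints = ldcomponents.RelayProxyEndpoints(
		"http://relay.internal.example.com:8030",
	)

	client, err := ld.MakeCustomClient("YOUR_SDK_KEY", config, 5*time.Second)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Evaluate flags as usual; updates now arrive over the local relay stream
	// rather than over a direct connection to LaunchDarkly.
}
```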

You want to reduce redundant update traffic to Redis

If you use Redis as a shared persistence option for feature flags and you have a large number of servers connected to LaunchDarkly, each server attempts to update Redis when a flag update occurs. This behavior is safe, but inefficient.

Deploy the Relay Proxy in daemon mode and configure your LaunchDarkly SDKs to use daemon mode as well. By doing this, you delegate flag updates to a small number of Relay Proxy instances and reduce the number of redundant update calls to Redis.
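To illustrate what this looks like from an SDK's perspective, here is a minimal Go sketch of daemon mode: the SDK skips its own LaunchDarkly connection and reads flag data from the Redis store that the Relay Proxy keeps up to date. The import paths, Redis address, and key prefix are assumptions; check the SDK reference for the exact persistent store integration in your language.

```go
package main

import (
	"time"

	ldredis "github.com/launchdarkly/go-server-sdk-redis-redigo/v2"
	ld "github.com/launchdarkly/go-server-sdk/v6"
	"github.com/launchdarkly/go-server-sdk/v6/ldcomponents"
)

func main() {
	var config ld.Config

	// Daemon mode: do not open a streaming connection to LaunchDarkly (or to
	// the Relay Proxy) for updates. The Relay Proxy keeps Redis up to date.
	config.DataSource = ldcomponents.ExternalUpdatesOnly()

	// Read flag data from the same Redis store the Relay Proxy writes to.
	// The URL and prefix below are placeholders for your own deployment.
	config.DataStore = ldcomponents.PersistentDataStore(
		ldredis.DataStore().
			URL("redis://redis.internal.example.com:6379").
			Prefix("launchdarkly"),
	).CacheSeconds(30)

	client, err := ld.MakeCustomClient("YOUR_SDK_KEY", config, 5*time.Second)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Flag evaluations now read from Redis; no outbound connection to
	// LaunchDarkly is made for flag data.
}
```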

Best practices for using the Relay Proxy

This section includes some guidelines for positioning and using the LaunchDarkly Relay Proxy successfully. These guidelines are not exhaustive or required. The most effective practices for your organization may be different based on your configuration and deployment requirements.

To learn more about performance expectations once the Relay Proxy is running, read Monitoring and troubleshooting the Relay Proxy.

Scaling guidelines

If you want to size or scale your Relay Proxy, the most important thing to consider is the amount of dedicated network bandwidth available to it. The amounts of CPU and memory available are less of a concern, because the Relay Proxy does not demand much of either. However, it does put a high load on the network, because it makes many small requests very frequently.

We have tested and developed the Relay Proxy to work with an AWS m4.xlarge instance, but you can use the Relay Proxy with any technical equivalent. The m4.xlarge instances we test against have 4 vCPUs and 16GiB of memory, but that is not a hard requirement. In fact, the Relay Proxy may use significantly less memory and CPU than the m4.xlarge instance offers. More importantly, the m4.xlarge instance has sufficient networking performance that the Relay Proxy should perform well.

To learn more about instance sizing, read Amazon's documentation on EC2 instance types.

Architectural guidelines

If you choose to use the LaunchDarkly Relay Proxy, position it effectively within your network architecture. Your application must be able to access the Relay Proxy for it to work, and that architecture varies based on the type of app you have.

For example, do not put the Relay Proxy inside a firewall if you intend to connect it to any client-side apps.

Caching guidelines

We do not recommend using in-memory caching for the Relay Proxy in a production environment.

If you use in-memory caching, the cache is emptied whenever the Relay Proxy shuts down and must repopulate when it starts again. This can cause performance degradation or lag while the cache repopulates.

