
The LaunchDarkly Relay Proxy


Overview

This topic explains what the LaunchDarkly Relay Proxy is and how to use it.

The LaunchDarkly Relay Proxy is a microservice that connects to the LaunchDarkly streaming API and proxies that connection to multiple clients.

The Relay Proxy lets a number of servers connect to a local stream instead of making a large number of outbound connections to LaunchDarkly's streaming service (stream.launchdarkly.com). You can configure the Relay Proxy to carry multiple environment streams from multiple projects.

The Relay Proxy is an open-source project supported by LaunchDarkly. The full source is in a GitHub repository. There's also a Docker image on Docker Hub.
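For example, a minimal Relay Proxy configuration file carrying streams for two environments might look like the following sketch. The section and key names follow the conventions in the Relay Proxy README, and the environment names and SDK keys are placeholders; check the documentation for the version you deploy:

    # relay.conf - a minimal sketch with two environments
    [Main]
    port = 8030

    [Environment "production"]
    sdkKey = "YOUR-PRODUCTION-SDK-KEY"

    [Environment "staging"]
    sdkKey = "YOUR-STAGING-SDK-KEY"

The Docker image accepts equivalent settings as environment variables, such as LD_ENV_production set to that environment's SDK key.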

Should I use the Relay Proxy?

We recommend that all customers on LaunchDarkly's Enterprise plan use the Relay Proxy.

If you're on LaunchDarkly's Starter or Professional plan, you can use the Relay Proxy if you wish, but the operating costs associated with onboarding the Relay Proxy may be prohibitive.

To learn more, read Best practices for using the Relay Proxy.

Another deciding factor is which of LaunchDarkly's client-side and server-side SDKs you use. The Relay Proxy provides performance and resilience improvements for all server-side SDKs and for SDKs configured to poll.

To learn more, read Understanding the different types of SDKs.

The Relay Proxy works best with the default SDK configurations for all server-side SDKs and for the client-side JavaScript SDK. It does not scale well when it has to maintain streaming connections with a large number of client-side SDKs, including mobile SDKs. If you're utilizing LaunchDarkly's streaming architecture in a heavily-used client-side or mobile application, connecting directly to LaunchDarkly's main service may give you the best performance.

The Relay Proxy was developed to address specific scenarios, and it works best when you use it for those purposes.

Those scenarios are:

You use PHP

PHP's shared-nothing architecture prevents LaunchDarkly from re-using the streaming API connection across requests.

You can use PHP without the Relay Proxy, but we strongly recommend using the proxy in daemon mode if you are using PHP in a high-throughput setting. In daemon mode, the Relay Proxy, rather than each PHP process, receives feature flag updates from LaunchDarkly.

To learn more, read Using the proxy in different modes.

You want to reduce outbound connections to LaunchDarkly

A large number of servers, such as thousands or tens of thousands, can present too many outbound persistent connections to LaunchDarkly's streaming API for a proxy or firewall to realistically handle.

Use the Relay Proxy in proxy mode so your servers can connect to hosts in your own datacenter instead of connecting directly to LaunchDarkly's streaming API.

On an appropriately configured host, each Relay Proxy can handle tens of thousands of concurrent connections. This dramatically reduces the number of outbound connections to the LaunchDarkly streaming API.
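As an illustration of the application side, a server-side SDK can be pointed at the Relay Proxy instead of LaunchDarkly's default service endpoints. The sketch below uses the Go server-side SDK; the RelayProxyEndpoints helper and package layout vary between SDKs and SDK versions, and the hostname, port, and SDK key are placeholders:

    package main

    import (
        "time"

        ld "github.com/launchdarkly/go-server-sdk/v6"
        "github.com/launchdarkly/go-server-sdk/v6/ldcomponents"
    )

    func main() {
        var config ld.Config

        // Send all SDK traffic (streaming, polling, events) to a Relay Proxy
        // host in your own datacenter instead of stream.launchdarkly.com.
        config.ServiceEndpoints = ldcomponents.RelayProxyEndpoints("http://relay.internal.example.com:8030")

        client, err := ld.MakeCustomClient("YOUR-SDK-KEY", config, 5*time.Second)
        if err != nil {
            // Handle or log the initialization failure.
        }
        if client != nil {
            defer client.Close()
        }
    }

In proxy mode, the Relay Proxy mimics LaunchDarkly's streaming endpoints, so no other SDK configuration changes are typically needed.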

You want to reduce redundant database traffic

If you use a persistent feature store and you have a large number of servers connected to LaunchDarkly, each server attempts to update the data store when a flag update occurs. This behavior is safe, but inefficient.

To learn more, read Using a persistent feature store.

Deploy the Relay Proxy in daemon mode and set your LaunchDarkly SDKs to daemon mode. By doing this, you can delegate flag updates to a small number of Relay Proxy instances and reduce the number of redundant update calls to your data store.
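As a sketch of the SDK side of this arrangement, the configuration below uses the Go server-side SDK with its Redis integration. The import paths, builder names, and Redis address are assumptions for illustration and differ by SDK and version; the key idea is that the SDK reads flags only from the shared store and never opens its own connection to LaunchDarkly:

    package main

    import (
        "time"

        ldredis "github.com/launchdarkly/go-server-sdk-redis-redigo/v2" // assumed integration package path
        ld "github.com/launchdarkly/go-server-sdk/v6"
        "github.com/launchdarkly/go-server-sdk/v6/ldcomponents"
    )

    func main() {
        var config ld.Config

        // Daemon mode: the SDK does not connect to LaunchDarkly or to the
        // Relay Proxy's streaming endpoint. The Relay Proxy alone keeps the
        // persistent store up to date.
        config.DataSource = ldcomponents.ExternalUpdatesOnly()

        // Read flag data from the same Redis instance the Relay Proxy writes
        // to, with a short in-memory cache so that not every flag evaluation
        // hits Redis.
        config.DataStore = ldcomponents.PersistentDataStore(
            ldredis.DataStore().URL("redis://redis.internal.example.com:6379"),
        ).CacheSeconds(30)

        client, err := ld.MakeCustomClient("YOUR-SDK-KEY", config, 5*time.Second)
        if err != nil {
            // Handle or log the initialization failure.
        }
        if client != nil {
            defer client.Close()
        }
    }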

Best practices for using the Relay Proxy

This section includes some guidelines for positioning and using the LaunchDarkly Relay Proxy successfully. These guidelines are not exhaustive or required. The most effective practices for your organization may be different based on your configuration and deployment requirements.

To learn more about performance expectations once the Relay Proxy is running, read Monitoring the Relay Proxy.

Scaling guidelines

If you want to size or scale your Relay Proxy, the most important thing to consider is the amount of dedicated network bandwidth available to it. The Relay Proxy is not CPU or memory intensive, so these are unlikely to be performance bottlenecks. However, the Relay Proxy does require a significant amount of network bandwidth, because it makes many small requests very frequently.

We have tested and developed the Relay Proxy to work with an AWS m4.xlarge instance, but you can use the Relay Proxy with any technical equivalent. The m4.xlarge instances we test against have 4 vCPUs and 16GiB of memory, but that is not a hard requirement. In fact, the Relay Proxy may use significantly less memory and CPU than the m4.xlarge instance offers. More importantly, the m4.xlarge instance has sufficient networking performance that the Relay Proxy should perform well.

To learn more about instance sizing, read Amazon's documentation on EC2 instance types.

The Relay Proxy works best with LaunchDarkly's server-side SDKs and with SDKs configured for polling. It does not handle streaming connections from client-side SDKs efficiently. When you use LaunchDarkly's streaming architecture in a heavily-used client-side or mobile application, be sure to monitor and scale the Relay Proxy accordingly.

To learn more, read Monitoring the Relay Proxy.

Architectural guidelines

If you choose to use the LaunchDarkly Relay Proxy, position it effectively within your network architecture. Your application must be able to access the Relay Proxy for it to work, and that architecture varies based on the type of app you have.

For example, do not put the Relay Proxy behind a firewall if you intend to connect it to any client-side apps.

If you have deployed your application to multiple regions, consider running one or more Relay Proxy instances in each of those regions in close proximity to your application. This limits latency between your application and the Relay Proxy.

Caching guidelines

We do not recommend relying solely on the Relay Proxy's in-memory caching in a production environment. Instead, we recommend that you cache flag data in a persistent feature store.

To learn more, read Using a persistent feature store and Persistent storage.
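For example, backing the Relay Proxy with Redis takes only a few lines in its configuration file. The following is a sketch; the section and key names follow the Relay Proxy README, and the host is a placeholder:

    # Added to the Relay Proxy configuration file
    [Redis]
    host = "redis.internal.example.com"
    port = 6379
    # How long the Relay Proxy serves cached flag data before re-reading Redis.
    # Older Relay Proxy versions express this value in milliseconds instead.
    localTtl = 30s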

Whether or not you use a persistent feature store impacts how the Relay Proxy handles inbound feature flag requests on initialization, before it establishes a connection to LaunchDarkly.

If you use a persistent feature store, the Relay Proxy serves the last known good values stored in that feature store. Without a persistent feature store, the Relay Proxy doesn't know anything about your feature flags yet, and SDKs use default flag values until the Relay Proxy establishes a connection to LaunchDarkly.