Adding experiment code


Once you've created your experiment in PostHog, the next step is to add your code:

Step 1: Fetch the feature flag

In your experiment, each user is randomly assigned to a variant (usually either 'control' or 'test'). To check which variant a user has been assigned to, fetch the experiment feature flag. You can then customize their experience based on the value in the feature flag:

// Ensure flags are loaded before usage.
// You only need to call this once, when the user first visits the page.
// See this doc for more details: https://posthog.com/docs/feature-flags/manual#ensuring-flags-are-loaded-before-usage
posthog.onFeatureFlags(function () {
    // feature flags are guaranteed to be available at this point
    if (posthog.getFeatureFlag('experiment-feature-flag-key') === 'variant-name') {
        // do something
    }
})

// Otherwise, if you know flags are already loaded, you can call it directly:
if (posthog.getFeatureFlag('experiment-feature-flag-key') === 'variant-name') {
    // do something
}

// You can also test your code by overriding the feature flag:
// e.g., posthog.featureFlags.override({ 'experiment-feature-flag-key': 'test' })

Feature flags are not yet supported in our Java, Rust, and Elixir SDKs. To run an experiment with these SDKs, see our docs on how to run experiments without feature flags. The same applies to running experiments using our API.

Step 2 (server-side only): Add the feature flag to your events

This step is not required for events that are submitted via our client-side SDKs (e.g., JavaScript web, iOS, Android, React, React Native).

For our backend SDKs, with the exception of the Go library, this step is not required if the server has local evaluation enabled and the flag in question has no property filters. In these cases, flag information is automatically appended to every event sent to PostHog.
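For example, in posthog-node, local evaluation is enabled by passing a personal API key when constructing the client. This is a sketch; the key values and host are placeholders you'd replace with your own:

```javascript
const { PostHog } = require('posthog-node')

// Providing personalApiKey enables local evaluation: the SDK polls flag
// definitions and evaluates them in-process, so flag information can be
// appended to events without an extra network call per event.
const client = new PostHog('<ph_project_api_key>', {
    host: '<ph_instance_address>',
    personalApiKey: '<ph_personal_api_key>',
})
```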

For any server-side events that are also goal metrics for your experiment, you need to include feature flag information when capturing those events. This ensures that the event is attributed to the correct experiment variant (e.g., test or control).

There are two methods to do this:

Method 1: Include the feature flag property when capturing events

Include the property $feature/experiment-feature-flag-key: 'variant-name' when capturing events:

client.capture({
    distinctId: 'distinct_id',
    event: 'event_name_of_your_goal_metric',
    properties: {
        '$feature/experiment-feature-flag-key': 'variant-name',
    },
})

Method 2: Set sendFeatureFlags to true

The capture() method has an optional argument sendFeatureFlags, which defaults to false. Setting it to true automatically sends feature flag information with the event.

Note that by doing this, PostHog will make an additional request to fetch feature flag information before capturing the event. So this method is only recommended if you don't mind the extra API call and delay.

Node.js
client.capture({
    distinctId: 'distinct_id_of_your_user',
    event: 'event_name',
    sendFeatureFlags: true,
})

Questions?

  • Jonathan
    10 months ago

    Feature Flags (A/B) -> A/B Testing?

    Is there a way to import our feature flags with variants into A/B testing?

  • Ricardo
    a year ago

    Correct flow

    Hi there! I'm implementing server-side A/B testing using the API, and I have a question. What's the correct flow? This is what I'm doing:

    1. Call /decide to retrieve all the FFs and get the one I need.
    2. Call /capture with the $feature_flag_called event to track the usage of the flag.
    3. Once the user does what is needed to trigger an experiment win, I call /capture again with $feature/key: variant with the assigned variant and a custom event.

    Is this everything or am I missing anything? Thanks in advance
    • Ian
      a year ago

      Yup, that is everything you need to do when doing server-side A/B testing with the API.

  • Václav
    a year ago

    why twice

    Why is posthog.getFeatureFlag() called twice in the code snippet?

  • Jonathon
    a year ago

    More details about SSR and hydration

    I'd like a complex tutorial covering:

    1. Fetching a featureFlag at the edge (checking for an existing ph cookie, setting one if missing)
    2. Entering a user into a featureFlag group (i.e. control or test) at the edge and setting that in the cookie
    3. Using that data to SSR a page in Next.js (different component per control/test group)
    4. Bootstrapping that SSR'd data into the React client-side posthog client, to avoid hydration errors.