Troubleshooting and FAQs

How do I use an existing feature flag in an experiment?

We generally don't recommend this, since experiment feature flags need to be in a specific format (see below), otherwise they won't work.

However, if you insist on doing this (for example, because you don't want to make a code change), you can do it for multiple-variant feature flags only, by doing the following:

  1. Delete the existing feature flag you'd like to use in the experiment.
  2. Create a new experiment and give your feature flag the same key as the feature flag you deleted in step 1.
  3. Name the first variant in your new feature flag 'control'.

Note: Deleting a flag is equivalent to disabling it, so it is off for however long it takes you to create the draft experiment. The flag is re-enabled as soon as you create the experiment (it doesn't need to be launched).
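Because the new experiment flag keeps the same key, existing call sites in your code continue to work unchanged. As a rough sketch with posthog-js (the flag key and variant names here are hypothetical):

```typescript
import posthog from 'posthog-js'

// The recreated experiment flag reuses the same key, so this existing
// call site keeps working. The first variant must be named 'control'.
const variant = posthog.getFeatureFlag('checkout-redesign') // hypothetical key

if (variant === 'control') {
  // existing behaviour
} else if (variant === 'test') {
  // behaviour under test
}
```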

How do I run a second experiment using the same feature flag as the first experiment?

This is similar to running an experiment using an existing feature flag. If you want to re-run an experiment (using the same feature flag key) while preserving the previous experiment results, delete the existing feature flag (not the experiment) and use the same key in the new experiment.

How can I run experiments with my custom feature flag setup?

See our docs on how to run an experiment without using feature flags.
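If your custom setup assigns variants itself, the general idea is to attach the assigned variant to the events you capture so PostHog can attribute them to the experiment. A minimal sketch with posthog-js, assuming the `$feature/<flag-key>` event property convention and a hypothetical flag key and assignment helper:

```typescript
import posthog from 'posthog-js'

// Hypothetical assignment logic owned by your own system.
function assignVariant(userId: string): 'control' | 'test' {
  return userId.charCodeAt(0) % 2 === 0 ? 'control' : 'test'
}

// Attach the assigned variant to captured events so experiment results
// can attribute them to the right variant.
posthog.capture('signup_completed', {
  '$feature/my-experiment': assignVariant('user-123'),
})
```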

How do I assign a specific person to the control/test variant in an experiment?

Once you create the experiment, go to the feature flag and scroll down to "Release Conditions". Each condition has an "Optional Override", which lets you force everyone who matches that release condition into the variant chosen in the override.
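For example, you could identify a teammate with a person property, create a release condition matching that property, and set its optional override to the variant you want. A minimal sketch with posthog-js (the distinct ID and email are hypothetical):

```typescript
import posthog from 'posthog-js'

// Identify the person with a property you can target in a release
// condition (e.g. email), then set that condition's optional override
// to 'test' in the flag's Release Conditions UI.
posthog.identify('teammate-distinct-id', {
  email: 'teammate@example.com',
})
```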

My Feature Flag Called events show None or false instead of my variant names

The Feature Flag Response property is false for users who called your feature flag but did not match any of the rollout conditions.

None indicates that the feature flag is disabled or failed to load, for example due to a network error or another unexpected issue.
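You can see the same distinction in code. A minimal sketch with posthog-js (the flag key is hypothetical):

```typescript
import posthog from 'posthog-js'

const variant = posthog.getFeatureFlag('my-experiment') // hypothetical key

if (variant === undefined) {
  // Flags haven't loaded yet (e.g. called too early or a network error);
  // these calls show up as "None".
} else if (variant === false) {
  // The user didn't match any release condition; shows up as "false".
} else {
  // variant is the matched variant key, e.g. 'control' or 'test'.
}
```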

Why are my A/B test event numbers lower than when I create an insight directly?

Experiment results only count events that include the experiment's feature flag data. Sometimes the flags haven't loaded yet when experiment events are captured. When that happens, users don't see the experiment, their events don't carry the flag data, and they aren't included in the results calculation.

By default, insights count all the events, whether they include flag data or not. This is why they show a higher number. To confirm this, break down an insight by your experiment's flag and check the number of events with the value None.

A common case is using pageviews as your goal metric. Because pageviews are captured as soon as PostHog loads, the flag data may not have loaded yet, especially for first-time users whose flags aren't cached. As a result, the pageview count in insights might be higher than in your experiment.

To fix this, make sure flags are available immediately on page load. There are two options, sketched below:

  1. Wait for feature flags to load before showing the page (low engineering effort, but slows the page down by ~200ms).
  2. Use client-side bootstrapping (high engineering effort, but keeps the page blazing fast).
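For option 1, a minimal sketch with posthog-js uses the `onFeatureFlags` callback to delay rendering until flags are available (the flag key and render function are placeholders):

```typescript
import posthog from 'posthog-js'

// Placeholder for whatever renders your page or experiment UI.
function renderPage(variant: string | boolean | undefined) {
  document.body.dataset.variant = String(variant)
}

// Delay rendering until flags have loaded, so pageviews and other early
// events carry the flag data.
posthog.onFeatureFlags(() => {
  renderPage(posthog.getFeatureFlag('my-experiment')) // hypothetical key
})
```

For option 2, bootstrapping means passing flag values you've already evaluated on the server into `posthog.init`, so they are available synchronously from the very first event. A sketch with hypothetical values:

```typescript
import posthog from 'posthog-js'

// The distinct ID and flag values would come from your server, where the
// flags were evaluated before the page was served.
posthog.init('<ph_project_api_key>', {
  api_host: 'https://us.i.posthog.com',
  bootstrap: {
    distinctID: 'user-123',
    featureFlags: { 'my-experiment': 'test' },
  },
})
```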

Questions?

Got a question which isn't answered here? Head to the community forum to let us know!
