How to use evaluation runtimes and environments together for fine-grained flag control
Evaluation runtimes and evaluation environments are two complementary features that give you precise control over where and when your feature flags are evaluated. This guide shows practical examples of using them together effectively.
Prerequisites: First understand evaluation environments and evaluation runtimes individually.
How they work together
These features apply as sequential filters:
- Runtime filter first: Excludes flags based on the SDK type (client vs server)
- Environment filter second: Further filters based on environment context
Example filtering flow
Consider a flag with:
- Runtime: `server`
- Evaluation environments: `["production", "api"]`
Here's what happens in different scenarios:
| SDK type | Environment values | Result |
|---|---|---|
| JavaScript Web | `["production", "web"]` | ❌ Blocked by runtime (client SDK can't access a server flag) |
| Node.js | `["staging", "backend"]` | ❌ Blocked by environments (neither "staging" nor "backend" matches the flag's tags) |
| Node.js | `["production", "api"]` | ✅ Both filters pass |
| Python | `["production", "backend"]` | ✅ Both filters pass ("production" matches) |
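The filtering flow above can be sketched in plain Python. This is an illustrative model of the two-stage filter, not PostHog's actual implementation; the SDK lists and helper name are hypothetical.

```python
# Hypothetical model of the two-stage flag filter (illustrative only).
CLIENT_SDKS = {"JavaScript Web", "iOS", "Android"}


def should_evaluate(flag_runtime, flag_environments, sdk_name, sdk_environments):
    """Apply the runtime filter first, then the environment filter."""
    # Stage 1: runtime filter by SDK type (client vs. server).
    sdk_type = "client" if sdk_name in CLIENT_SDKS else "server"
    if flag_runtime != "all" and flag_runtime != sdk_type:
        return False  # blocked by runtime
    # Stage 2: environment filter -- OR logic, any overlap passes.
    if flag_environments and sdk_environments is not None:
        return bool(set(flag_environments) & set(sdk_environments))
    return True


# Flag from the example: runtime "server", environments ["production", "api"]
flag = ("server", ["production", "api"])
print(should_evaluate(*flag, "JavaScript Web", ["production", "web"]))  # False
print(should_evaluate(*flag, "Node.js", ["staging", "backend"]))        # False
print(should_evaluate(*flag, "Node.js", ["production", "api"]))         # True
print(should_evaluate(*flag, "Python", ["production", "backend"]))      # True
```

The four calls reproduce the four table rows: the runtime check short-circuits before environments are even considered.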
Quick setup
You can configure both features when creating or editing a feature flag in PostHog:
- Set evaluation runtime to `server`, `client`, or `all`
- Add evaluation environments and mark them as constraints (bolt icon ⚡)
Then configure your SDKs with a matching `evaluation_environments` setting. See the evaluation environments documentation for SDK configuration examples.
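As a hedged sketch, server-side configuration in Python might look like the following. The exact keyword argument and its placement may differ by SDK and version, so treat this as an assumption and check the evaluation environments documentation:

```python
from posthog import Posthog

# Illustrative configuration sketch -- the evaluation_environments
# parameter name is taken from this guide and may differ per SDK.
posthog = Posthog(
    "<ph_project_api_key>",            # placeholder project API key
    host="https://us.i.posthog.com",
    # Tags this SDK instance declares; constraint-tagged flags whose
    # tags don't overlap with these won't be evaluated here.
    evaluation_environments=["production", "api"],
)
```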
Common use cases
API rate limits that shouldn't be exposed to clients
Scenario: You have rate limiting logic that varies by customer tier, but you don't want to expose these business rules to client-side code where competitors could inspect them.
Configuration:
- Runtime: `server`
- Evaluation environments: `["api"]`
Why both features? Runtime ensures the flag never reaches browsers where it could be inspected. Environments let you exclude this flag from other server contexts (like background workers) to reduce evaluation costs.
Preventing staging features from affecting production
Scenario: You're testing a new recommendation algorithm in staging, but some services are shared between staging and production environments.
Configuration:
- Runtime: `all`
- Evaluation environments: `["staging"]`
Why both features? You need the flag in both client and server contexts (runtime: `all`), but only in staging. The environment constraint ensures production services never evaluate this flag, even if they share code with staging.
Rolling out mobile features without affecting web
Scenario: You're testing a new native camera feature that only makes sense on mobile apps, and you want to ensure web users never download this flag's code.
Configuration:
- Runtime: `client`
- Evaluation environments: `["mobile"]`
Why both features? Runtime `client` prevents server-side services from evaluating this UI-specific flag. The `mobile` tag ensures web browsers don't download or evaluate it, improving performance.
A/B testing pricing only where it matters
Scenario: You're testing new pricing tiers, but only want to evaluate this in your billing service and checkout UI, not in every service and client.
Configuration:
- Runtime: `all`
- Evaluation environments: `["billing"]`
Why both features? The pricing affects both the frontend (checkout UI) and the backend (billing service), so the runtime is `all`. But you don't want every service and client evaluating this flag thousands of times; only the specific parts that handle billing should.
Best practices when combining features
Use runtime for security boundaries
Set runtime to `server` for flags containing:
- Sensitive business logic
- Rate limits or quotas
- Infrastructure settings
Layer environments for precise control
Remember that evaluation environments use OR logic: `["staging", "checkout"]` matches any staging context OR any checkout context. For AND logic, use a compound tag like `["staging-checkout"]`.
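The OR rule and the compound-tag workaround can be illustrated with a small sketch. The `matches` helper is hypothetical, not a PostHog API:

```python
def matches(flag_tags, sdk_tags):
    """OR logic: the flag evaluates if ANY flag tag appears in the SDK's tags."""
    return bool(set(flag_tags) & set(sdk_tags))


# OR logic: a flag tagged ["staging", "checkout"] matches any staging
# context OR any checkout context.
print(matches(["staging", "checkout"], ["staging", "payments"]))     # True
print(matches(["staging", "checkout"], ["production", "checkout"]))  # True

# AND-like behavior needs a compound tag, declared as-is in the SDK.
print(matches(["staging-checkout"], ["staging", "checkout"]))          # False
print(matches(["staging-checkout"], ["staging-checkout", "staging"]))  # True
```

Note the third call: the compound tag does not decompose into its parts, so an SDK must declare `"staging-checkout"` verbatim to match.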
Start simple
- Set runtime first (security boundary)
- Add basic evaluation environment tags
- Refine with specific tags as needed
Troubleshooting combined setups
When a flag isn't working as expected, check in this order:
- Runtime filter: Is the SDK type (client/server) allowed?
- Environment filter: Does at least one evaluation tag match?
- SDK config: Is `evaluation_environments` set?
For detailed troubleshooting steps, see the evaluation environments documentation.
Common pitfalls
Forgetting the two-stage filter
Runtime blocks first, then environments. A server-only flag will never reach client SDKs, regardless of evaluation environments.
Missing SDK configuration
Without `evaluation_environments` in your SDK, environment filtering won't work: the SDK declares no environment context to match against, so the environment filter is skipped and every otherwise-eligible flag is evaluated.
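A minimal sketch of this pitfall (illustrative logic, not the SDK's internals): when no environments are declared, everything passes through.

```python
def environment_filter(flags, sdk_environments):
    """Illustrative only: with no declared environments, no filtering happens."""
    if sdk_environments is None:  # SDK config missing evaluation_environments
        return flags              # every flag passes through
    return [
        f for f in flags
        if not f["environments"] or set(f["environments"]) & set(sdk_environments)
    ]


flags = [
    {"key": "new-pricing", "environments": ["billing"]},  # constrained flag
    {"key": "dark-mode", "environments": []},             # unconstrained flag
]
print([f["key"] for f in environment_filter(flags, None)])     # ['new-pricing', 'dark-mode']
print([f["key"] for f in environment_filter(flags, ["web"])])  # ['dark-mode']
```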
Summary
- Runtime filters by SDK type (security boundary)
- Environments filter by context (organization)
- They work sequentially: runtime blocks first, then environments filter
For implementation details: