How to set up external caching for local evaluation in Node.js

Note: This feature is experimental and may change in minor versions.

When using local evaluation, the Node SDK periodically fetches feature flag definitions from PostHog and stores them in memory. This works well for single-instance applications, but in multi-worker or edge environments, you may want to:

  • Share flag definitions across workers to reduce API calls
  • Coordinate fetching so only one worker fetches at a time
  • Pre-cache definitions for ultra-low-latency flag evaluation

The FlagDefinitionCacheProvider interface lets you implement custom caching using any storage backend (Redis, database, Cloudflare KV, etc.).

When to use external caching

| Scenario | Recommendation |
|---|---|
| Single server instance | Built-in caching is sufficient |
| Multiple workers (same process) | Built-in caching is sufficient |
| Multiple servers/containers | Use Redis or database caching with distributed locks |
| Edge workers (Cloudflare, Vercel Edge) | Use KV storage with a split read/write pattern |

Installation

Import the interface and its data type from the experimental module:

```typescript
import { FlagDefinitionCacheProvider, FlagDefinitionCacheData } from 'posthog-node/experimental'
```

The interface

To create a custom cache, implement the FlagDefinitionCacheProvider interface:

```typescript
interface FlagDefinitionCacheProvider {
  // Retrieve cached flag definitions
  getFlagDefinitions(): Promise<FlagDefinitionCacheData | undefined> | FlagDefinitionCacheData | undefined

  // Determine if this instance should fetch new definitions
  shouldFetchFlagDefinitions(): Promise<boolean> | boolean

  // Store definitions after a successful fetch
  onFlagDefinitionsReceived(data: FlagDefinitionCacheData): Promise<void> | void

  // Clean up resources on shutdown
  shutdown(): Promise<void> | void
}
```

The FlagDefinitionCacheData type contains everything needed for local evaluation:

```typescript
interface FlagDefinitionCacheData {
  flags: PostHogFeatureFlag[] // Feature flag definitions
  groupTypeMapping: Record<string, string> // Group type index to name mapping
  cohorts: Record<string, PropertyGroup> // Cohort definitions for local evaluation
}
```

Method details

| Method | Purpose | Return value |
|---|---|---|
| getFlagDefinitions() | Retrieve cached definitions. Called when the poller refreshes. | Cached data, or undefined if the cache is empty |
| shouldFetchFlagDefinitions() | Decide if this instance should fetch. Use for distributed coordination (e.g., locks). | true to fetch, false to skip |
| onFlagDefinitionsReceived(data) | Store definitions after a successful API fetch. | void |
| shutdown() | Release locks, close connections, clean up resources. | void |

Note: All methods may throw errors. The SDK catches and logs them gracefully, ensuring cache provider errors never break flag evaluation.
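
To get a feel for the shape of an implementation, here is a minimal in-memory sketch. The class name is made up for this example, and a single-process cache like this adds nothing over the SDK's built-in caching; it is only a starting skeleton.

```typescript
import { FlagDefinitionCacheProvider, FlagDefinitionCacheData } from 'posthog-node/experimental'

// Hypothetical minimal provider: keeps the latest definitions in process memory
// and always lets the poller fetch.
class InMemoryFlagDefinitionCache implements FlagDefinitionCacheProvider {
  private data?: FlagDefinitionCacheData

  getFlagDefinitions(): FlagDefinitionCacheData | undefined {
    return this.data
  }

  shouldFetchFlagDefinitions(): boolean {
    // No coordination needed for a single process
    return true
  }

  onFlagDefinitionsReceived(data: FlagDefinitionCacheData): void {
    this.data = data
  }

  shutdown(): void {
    this.data = undefined
  }
}
```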

Using your cache provider

Pass your cache provider when initializing PostHog:

```typescript
import { PostHog } from 'posthog-node'

const cache = new YourCacheProvider()

const posthog = new PostHog('<ph_project_api_key>', {
  personalApiKey: '<ph_personal_api_key>',
  enableLocalEvaluation: true,
  flagDefinitionCacheProvider: cache,
})
```

Common patterns

Shared caches with locking

When running multiple server instances with a shared cache like Redis, coordinate fetching so only one instance polls PostHog at a time.

The recommended pattern, illustrated in the sketch after this list:

  • One instance owns the lock for its entire lifetime, not just during a single fetch
  • Refresh the lock TTL each polling cycle to maintain ownership
  • Release on shutdown, but only if you own the lock
  • Let locks expire if a process crashes, so another instance can take over
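
As an illustration of that pattern, here is a sketch using ioredis. The key names, TTL, and lock-ownership token are assumptions made for the example, and the check-then-refresh and check-then-delete steps are not strictly atomic, so treat this as a starting point rather than a hardened locking scheme.

```typescript
import Redis from 'ioredis'
import { FlagDefinitionCacheProvider, FlagDefinitionCacheData } from 'posthog-node/experimental'

// Illustrative keys and TTL; adjust to your deployment.
const DEFINITIONS_KEY = 'posthog:flag-definitions'
const LOCK_KEY = 'posthog:flag-definitions:lock'
const LOCK_TTL_SECONDS = 120 // should exceed the polling interval

class RedisFlagDefinitionCache implements FlagDefinitionCacheProvider {
  // Unique token so this instance can recognize its own lock
  private lockId = `${process.pid}-${Math.random().toString(36).slice(2)}`

  constructor(private redis: Redis) {}

  async getFlagDefinitions(): Promise<FlagDefinitionCacheData | undefined> {
    const raw = await this.redis.get(DEFINITIONS_KEY)
    return raw ? (JSON.parse(raw) as FlagDefinitionCacheData) : undefined
  }

  async shouldFetchFlagDefinitions(): Promise<boolean> {
    // Try to take the lock; NX means "only set if it does not already exist"
    const acquired = await this.redis.set(LOCK_KEY, this.lockId, 'EX', LOCK_TTL_SECONDS, 'NX')
    if (acquired === 'OK') {
      return true
    }
    // If we already own the lock, refresh its TTL and keep fetching
    if ((await this.redis.get(LOCK_KEY)) === this.lockId) {
      await this.redis.expire(LOCK_KEY, LOCK_TTL_SECONDS)
      return true
    }
    return false
  }

  async onFlagDefinitionsReceived(data: FlagDefinitionCacheData): Promise<void> {
    await this.redis.set(DEFINITIONS_KEY, JSON.stringify(data))
  }

  async shutdown(): Promise<void> {
    // Release the lock only if we still own it
    if ((await this.redis.get(LOCK_KEY)) === this.lockId) {
      await this.redis.del(LOCK_KEY)
    }
  }
}
```

Construct the provider with a shared ioredis client and pass it to the PostHog constructor as shown above. If the lock-holding process crashes, the TTL expires and another instance takes over on its next polling cycle.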

Caches without locking

Some storage backends like Cloudflare KV don't support atomic locking operations. In these cases, use a split read/write pattern:

  1. A scheduled job (cron) periodically fetches flag definitions and writes to the cache
  2. Request handlers read from the cache and evaluate flags locally, with no API calls

This separates the concerns entirely: one process writes, all others read. The sketch below shows the read side.
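
Here is a sketch of a read-only provider for request handlers, assuming a Cloudflare KV binding (the KVNamespace type comes from Cloudflare's Workers types) and an illustrative key name:

```typescript
import { FlagDefinitionCacheProvider, FlagDefinitionCacheData } from 'posthog-node/experimental'

// Hypothetical read-only provider used in request handlers.
class KVReadOnlyFlagDefinitionCache implements FlagDefinitionCacheProvider {
  constructor(private kv: KVNamespace) {}

  async getFlagDefinitions(): Promise<FlagDefinitionCacheData | undefined> {
    const data = await this.kv.get<FlagDefinitionCacheData>('posthog-flag-definitions', 'json')
    return data ?? undefined
  }

  shouldFetchFlagDefinitions(): boolean {
    return false // request handlers never call the PostHog API
  }

  onFlagDefinitionsReceived(): void {
    // Never called, because this instance never fetches
  }

  shutdown(): void {}
}
```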

Cloudflare Workers example

A complete working example is available in the posthog-js repository. It uses the split read/write pattern described above. The worker's scheduled job writes flag definitions to KV, and request handlers read from it.

This pattern is ideal for high-traffic edge applications where flag evaluation must be extremely fast and you can tolerate flag updates being slightly delayed.
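
For a rough idea of the write side (the working example in the repository may differ), a provider like the following persists each fetched payload to KV; the scheduled job would pass it to a PostHog client so the poller's fetch ends up in the cache. The class and key names are made up for illustration.

```typescript
import { FlagDefinitionCacheProvider, FlagDefinitionCacheData } from 'posthog-node/experimental'

// Hypothetical write-side provider used only by the scheduled job.
class KVWriteFlagDefinitionCache implements FlagDefinitionCacheProvider {
  constructor(private kv: KVNamespace) {}

  getFlagDefinitions(): FlagDefinitionCacheData | undefined {
    return undefined // always fetch fresh definitions from PostHog
  }

  shouldFetchFlagDefinitions(): boolean {
    return true // the scheduled job is the single writer
  }

  async onFlagDefinitionsReceived(data: FlagDefinitionCacheData): Promise<void> {
    await this.kv.put('posthog-flag-definitions', JSON.stringify(data))
  }

  shutdown(): void {}
}
```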
