How to set up external caching for local evaluation in Node.js
Note: This feature is experimental and may change in minor versions.
When using local evaluation, the Node SDK periodically fetches feature flag definitions from PostHog and stores them in memory. This works well for single-instance applications, but in multi-worker or edge environments, you may want to:
- Share flag definitions across workers to reduce API calls
- Coordinate fetching so only one worker fetches at a time
- Pre-cache definitions for ultra-low-latency flag evaluation
The FlagDefinitionCacheProvider interface lets you implement custom caching using any storage backend (Redis, database, Cloudflare KV, etc.).
When to use external caching
| Scenario | Recommendation |
|---|---|
| Single server instance | Built-in caching is sufficient |
| Multiple workers (same process) | Built-in caching is sufficient |
| Multiple servers/containers | Use Redis or database caching with distributed locks |
| Edge workers (Cloudflare, Vercel Edge) | Use KV storage with split read/write pattern |
Installation
Import the interface from the experimental module:
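The exact export path may vary by SDK version; the snippet below assumes the types are exported from the package root. Check your installed version's exports if this fails to resolve:

```typescript
// Assumed export location for the experimental types; confirm against
// your posthog-node version before relying on this path.
import { PostHog } from 'posthog-node'
import type {
  FlagDefinitionCacheProvider,
  FlagDefinitionCacheData,
} from 'posthog-node'
```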
The interface
To create a custom cache, implement the FlagDefinitionCacheProvider interface:
The FlagDefinitionCacheData type contains everything needed for local evaluation:
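A sketch of both types, inferred from the method descriptions in this guide. Field names on FlagDefinitionCacheData are assumptions and may differ between SDK versions while the feature is experimental:

```typescript
// Sketch only: field and method names follow the descriptions in this
// guide; exact shapes may differ between posthog-node versions.
type FlagDefinitionCacheData = {
  flags: Record<string, unknown> // flag definitions keyed by flag key (assumed shape)
  groupTypeMapping?: Record<string, string>
  cohorts?: Record<string, unknown>
  fetchedAt: number // epoch ms of the last successful fetch (assumed)
}

interface FlagDefinitionCacheProvider {
  // Return cached definitions, or undefined when the cache is empty.
  getFlagDefinitions(): Promise<FlagDefinitionCacheData | undefined>
  // Decide whether this instance should call the PostHog API this cycle.
  shouldFetchFlagDefinitions(): Promise<boolean>
  // Store definitions after a successful API fetch.
  onFlagDefinitionsReceived(data: FlagDefinitionCacheData): Promise<void>
  // Release locks, close connections, clean up.
  shutdown(): Promise<void>
}
```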
Method details
| Method | Purpose | Return value |
|---|---|---|
| getFlagDefinitions() | Retrieve cached definitions. Called when the poller refreshes. | Cached data, or undefined if the cache is empty |
| shouldFetchFlagDefinitions() | Decide if this instance should fetch. Use for distributed coordination (e.g., locks). | true to fetch, false to skip |
| onFlagDefinitionsReceived(data) | Store definitions after a successful API fetch. | void |
| shutdown() | Release locks, close connections, and clean up resources. | void |
Note: All methods may throw errors. The SDK catches and logs them gracefully, ensuring cache provider errors never break flag evaluation.
Using your cache provider
Pass your cache provider when initializing PostHog:
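A sketch of initialization. The option name flagDefinitionCacheProvider and the import path of your provider class are assumptions; confirm the option name against the experimental API for your SDK version:

```typescript
import { PostHog } from 'posthog-node'
// RedisFlagDefinitionCache stands in for your own implementation of the
// FlagDefinitionCacheProvider interface (hypothetical module path).
import { RedisFlagDefinitionCache } from './redis-flag-definition-cache'

const client = new PostHog('<ph_project_api_key>', {
  host: 'https://us.i.posthog.com',
  personalApiKey: '<ph_personal_api_key>', // local evaluation requires a personal API key
  // Assumed option name for this experimental feature:
  flagDefinitionCacheProvider: new RedisFlagDefinitionCache(/* redis client */),
})
```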
Common patterns
Shared caches with locking
When running multiple server instances with a shared cache like Redis, coordinate fetching so only one instance polls PostHog at a time.
The recommended pattern:
- One instance owns the lock for its entire lifetime, not just during a single fetch
- Refresh the lock TTL each polling cycle to maintain ownership
- Release on shutdown, but only if you own the lock
- Let locks expire if a process crashes, so another instance can take over
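The pattern above can be sketched as a Redis-backed provider. This is a minimal illustration, not a production implementation: it assumes a client whose set supports NX/PX options (as in node-redis or ioredis), and the lock release is check-then-delete rather than an atomic Lua script. Key names and the TTL are illustrative:

```typescript
// Minimal subset of a Redis client used by this sketch.
interface MinimalRedis {
  // Resolves 'OK' when the key was written, null when NX prevented the write.
  set(key: string, value: string, opts?: { NX?: boolean; PX?: number }): Promise<'OK' | null>
  get(key: string): Promise<string | null>
  pexpire(key: string, ms: number): Promise<number>
  del(key: string): Promise<number>
}

const LOCK_KEY = 'posthog:flag-defs:lock'
const CACHE_KEY = 'posthog:flag-defs'
const LOCK_TTL_MS = 60_000 // longer than the polling interval, so the owner keeps the lock

class RedisFlagDefinitionCache {
  private ownsLock = false
  private readonly instanceId = Math.random().toString(36).slice(2)

  constructor(private redis: MinimalRedis) {}

  async getFlagDefinitions(): Promise<unknown | undefined> {
    const raw = await this.redis.get(CACHE_KEY)
    return raw ? JSON.parse(raw) : undefined
  }

  async shouldFetchFlagDefinitions(): Promise<boolean> {
    if (this.ownsLock) {
      // Refresh the TTL each polling cycle to maintain ownership.
      await this.redis.pexpire(LOCK_KEY, LOCK_TTL_MS)
      return true
    }
    // SET NX: exactly one instance acquires the lock; the rest skip fetching.
    const acquired = await this.redis.set(LOCK_KEY, this.instanceId, { NX: true, PX: LOCK_TTL_MS })
    this.ownsLock = acquired === 'OK'
    return this.ownsLock
  }

  async onFlagDefinitionsReceived(data: unknown): Promise<void> {
    await this.redis.set(CACHE_KEY, JSON.stringify(data))
  }

  async shutdown(): Promise<void> {
    // Release only if we still own the lock; otherwise let the TTL expire
    // so another instance can take over after a crash.
    if (this.ownsLock && (await this.redis.get(LOCK_KEY)) === this.instanceId) {
      await this.redis.del(LOCK_KEY)
    }
  }
}
```

In production, make the check-and-delete in shutdown() atomic with a small Lua script, since another instance could acquire the lock between the get and the del.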
Caches without locking
Some storage backends like Cloudflare KV don't support atomic locking operations. In these cases, use a split read/write pattern:
- A scheduled job (cron) periodically fetches flag definitions and writes to the cache
- Request handlers read from the cache and evaluate flags locally, with no API calls
This separates the concerns entirely. One process writes, all others read.
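A sketch of the split pattern for Cloudflare Workers. The KV binding name (FLAG_KV), the endpoint path, and the placeholder keys are assumptions; check PostHog's local evaluation API reference for the real fetch URL, and configure the cron trigger in wrangler.toml:

```typescript
// Writer/reader split for Cloudflare Workers KV. Binding and key names
// are illustrative; the endpoint URL is an assumption to verify.
interface Env {
  FLAG_KV: { get(key: string): Promise<string | null>; put(key: string, value: string): Promise<void> }
  POSTHOG_PERSONAL_API_KEY: string
}

const CACHE_KEY = 'posthog:flag-defs'

const worker = {
  // Writer: runs on the cron schedule and refreshes the KV cache.
  async scheduled(_event: unknown, env: Env): Promise<void> {
    const res = await fetch(
      'https://us.i.posthog.com/api/feature_flag/local_evaluation?token=<ph_project_api_key>',
      { headers: { Authorization: `Bearer ${env.POSTHOG_PERSONAL_API_KEY}` } }
    )
    if (res.ok) {
      await env.FLAG_KV.put(CACHE_KEY, await res.text())
    }
  },

  // Readers: evaluate flags from KV only; no PostHog API calls on the request path.
  async fetch(_request: Request, env: Env): Promise<Response> {
    const raw = await env.FLAG_KV.get(CACHE_KEY)
    if (!raw) return new Response('flag definitions not cached yet', { status: 503 })
    const definitions = JSON.parse(raw)
    // ...feed `definitions` into your cache provider / local evaluator here...
    return new Response(JSON.stringify({ flagCount: definitions.flags?.length ?? 0 }))
  },
}

export default worker
```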
Cloudflare Workers example
A complete working example is available in the posthog-js repository. It uses the split read/write pattern described above. The worker's scheduled job writes flag definitions to KV, and request handlers read from it.
This pattern is ideal for high-traffic edge applications where flag evaluation must be extremely fast and you can tolerate flag updates being slightly delayed.