# Prompt management

**Prompt management is in beta**

We'd love to [hear your feedback](https://app.posthog.com/home#panel=support%3Afeedback%3A%3Alow%3Atrue) as we develop this feature.

Prompt management lets you create and update LLM prompts directly in PostHog. When you use prompts through the SDK, they're fetched at runtime with caching and fallback support—so you can iterate on prompts without deploying code.

## Why use prompt management?

-   **Update prompts without code deploys** – Change prompts instantly from the PostHog UI
-   **Non-engineers can iterate** – Product and content teams can tweak prompts without touching code
-   **Track prompt usage** – Link prompts to generations to see which prompts drive which outputs
-   **Versioning** – Every change creates an immutable version you can view, compare, or restore
-   **(soon) A/B testing** – Support for testing different prompt variants using Experiments

## Creating prompts

1.  Navigate to **LLM analytics** > **Prompts**
2.  Click **New prompt**
3.  Enter a name for your prompt
4.  Write your prompt content, using `{{variables}}` for dynamic values
5.  Click **Create prompt**

This creates version 1 of your prompt. Each subsequent edit creates a new immutable version.

### Markdown preview

When viewing a prompt, markdown rendering is enabled by default, formatting your prompt text with headings, lists, bold, and other markdown elements. Click the markdown icon next to the **Prompt** label to toggle it off and view plain text. When editing a prompt, the view switches to plain text automatically.

### Prompt naming rules

-   Names are **immutable** after creation (cannot be changed)
-   Only letters, numbers, hyphens, and underscores allowed (`^[a-zA-Z0-9_-]+$`)
-   Names must be unique within your project
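
As an illustration, the naming rule above can be checked client-side before creating a prompt. This helper is a hypothetical sketch mirroring the documented pattern, not part of the SDK:

```python
import re

# Mirrors the documented naming rule: letters, numbers, hyphens, underscores
NAME_PATTERN = re.compile(r'^[a-zA-Z0-9_-]+$')

def is_valid_prompt_name(name: str) -> bool:
    """Return True if the name satisfies the documented pattern."""
    return bool(NAME_PATTERN.match(name))

print(is_valid_prompt_name('support-system-prompt'))  # True
print(is_valid_prompt_name('support prompt v2'))      # False (contains spaces)
```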

## Managing prompts via MCP

You can also manage prompts through the [PostHog MCP server](/docs/model-context-protocol.md) using AI agents like Claude Code, Cursor, or any MCP-connected tool.

The MCP server provides four prompt management tools:

| Tool | Description |
| --- | --- |
| prompt-list | List all team prompts with optional name filtering |
| prompt-get | Get a prompt by name, including full content |
| prompt-create | Create a new prompt with a unique name and content |
| prompt-update | Update an existing prompt's content by name |

This enables teams to manage prompts programmatically from agent workflows without using the web UI.

## Template variables

Use double curly braces to define variables in your prompts:

```text
You are a helpful assistant for {{company_name}}.
The user's name is {{user_name}} and their subscription tier is {{tier}}.
```

Variable names can include letters, numbers, underscores, hyphens, and dots.
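
To make the substitution behavior concrete, here is a minimal sketch of how a `{{variable}}` template might be compiled. This is a hypothetical re-implementation for illustration, not the SDK's actual `compile`:

```python
import re

def compile_template(template: str, variables: dict) -> str:
    """Replace each {{name}} with its value; names may include letters,
    numbers, underscores, hyphens, and dots, per the rule above."""
    def substitute(match):
        key = match.group(1)
        # Leave unknown variables intact rather than erasing them
        return str(variables.get(key, match.group(0)))
    return re.sub(r'\{\{\s*([A-Za-z0-9_.-]+)\s*\}\}', substitute, template)

prompt = compile_template(
    'You are a helpful assistant for {{company_name}}.',
    {'company_name': 'Acme Corp'}
)
# prompt == 'You are a helpful assistant for Acme Corp.'
```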

## Versioning

Every prompt change creates a new immutable version. Previous versions are preserved and accessible.

### How versioning works

-   The first save creates version 1
-   Each subsequent publish increments the version number
-   Previous versions are never modified
-   By default, the SDK fetches the latest version

### Publish a new version

1.  Open a prompt and click **Edit latest**
2.  Make your changes
3.  Click **Publish version**

If someone else published a new version while you were editing, you'll see a conflict error. Refresh the page to load the latest version and try again.

### Restore a previous version

1.  Open a prompt and select a version from the **Version history** sidebar
2.  Click **Use as latest**
3.  Edit the prompt content if needed
4.  Click **Publish version**

This publishes the old content as a new version. The original version remains unchanged.

### Compare versions

Compare the content of two prompt versions side-by-side to see what changed. This is available for prompts with two or more versions.

To compare versions:

1.  Open a prompt and click **Compare versions** next to the **Prompt** label
2.  The diff view shows the current version against the previous version by default
3.  Use the version dropdown to change the comparison target

You can also click the compare icon on any version in the **Version history** sidebar to compare it with the currently selected version. The comparison target is highlighted with a **Comparing** tag in the sidebar.

Unchanged regions are automatically collapsed in the diff view. Click **Compare versions** again to exit the diff view.

### Archive a prompt

Click **Archive** to remove a prompt from active use. This archives all versions of the prompt. Any code that fetches the prompt by name will no longer resolve it, so make sure callers have a fallback configured.

## Using prompts in code

### Prerequisites

-   **Personal API key (`phx_...`)** – Used as Bearer auth for prompt fetches
-   **Project API key (`phc_...`)** – Sent as the `token` query param so prompt reads resolve deterministically to the right project
-   **PostHog SDK** – Install the Python or JavaScript SDK with the AI package

When you initialize `Prompts` with a PostHog client, the SDK uses the client's project token automatically. If you initialize `Prompts` directly, you must pass both keys. Use your app host (for example, `https://us.posthog.com` or `https://eu.posthog.com`) for prompt reads, not an ingestion host like `https://us.i.posthog.com`.

### Python

```python
from posthog import Posthog
from posthog.ai.prompts import Prompts

# Initialize with a PostHog client
posthog = Posthog(
    '<your_project_api_key>',
    host='https://us.posthog.com',
    personal_api_key='<your_personal_api_key>'
)
prompts = Prompts(posthog)

# Or initialize directly
prompts = Prompts(
    personal_api_key='<your_personal_api_key>',
    project_api_key='<your_project_api_key>',
    host='https://us.posthog.com'
)

# Fetch the latest version of a prompt
template = prompts.get(
    'support-system-prompt',
    cache_ttl_seconds=600,  # Override default 5-minute cache
    fallback='You are a helpful assistant.'  # Used if fetch fails
)

# Or fetch a specific version
template = prompts.get(
    'support-system-prompt',
    version=2,
    fallback='You are a helpful assistant.'
)

# Compile with variables
system_prompt = prompts.compile(template, {
    'company': 'Acme Corp',
    'tier': 'premium',
    'user_name': 'Alice'
})

# Use in your LLM call
# ... your OpenAI/Anthropic call here
```

### JavaScript/TypeScript

```typescript
import { Prompts } from '@posthog/ai'
import { PostHog } from 'posthog-node'

// Initialize with a PostHog client
const posthog = new PostHog('<your_project_api_key>', {
  host: 'https://us.posthog.com',
  personalApiKey: '<your_personal_api_key>'
})
const prompts = new Prompts({ posthog })

// Or initialize directly (without a PostHog client)
// const prompts = new Prompts({
//   personalApiKey: '<your_personal_api_key>',
//   projectApiKey: '<your_project_api_key>',
//   host: 'https://us.posthog.com'
// })

// Fetch the latest version of a prompt
const template = await prompts.get('support-system-prompt', {
  cacheTtlSeconds: 600, // Override the default 5-minute cache
  fallback: 'You are a helpful assistant.' // Used if the fetch fails
})

// Or fetch a specific version
const pinnedTemplate = await prompts.get('support-system-prompt', {
  version: 2,
  fallback: 'You are a helpful assistant.'
})

// Compile with variables
const systemPrompt = prompts.compile(template, {
  company: 'Acme Corp',
  tier: 'premium',
  userName: 'Alice'
})

// Use in your LLM call
// ... your OpenAI/Anthropic call here
```

## Caching

Prompts are cached on the SDK side to minimize latency and API calls:

-   **Default TTL**: 5 minutes (300 seconds)
-   **Configurable per-request**: Override with `cache_ttl_seconds` (Python) or `cacheTtlSeconds` (JS)
-   **Stale-while-revalidate**: If a fetch fails, the cached value is used (even if expired)
-   **Fallback support**: Provide a fallback value that's used when both fetch and cache fail
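
The caching rules above can be sketched as follows. This is an illustrative model of the documented behavior (fresh hits served from cache, expired entries re-fetched, stale values returned on fetch failure, fallback as a last resort), not the SDK's internal implementation:

```python
import time

class PromptCache:
    """Sketch of the documented caching behavior."""

    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch      # callable: name -> prompt content
        self.ttl = ttl_seconds  # default 5-minute TTL
        self.entries = {}       # name -> (content, fetched_at)

    def get(self, name, fallback=None, ttl_seconds=None):
        ttl = ttl_seconds if ttl_seconds is not None else self.ttl
        entry = self.entries.get(name)
        if entry and time.monotonic() - entry[1] < ttl:
            return entry[0]                      # fresh cache hit
        try:
            content = self.fetch(name)           # miss or expired: refetch
            self.entries[name] = (content, time.monotonic())
            return content
        except Exception:
            if entry:
                return entry[0]                  # serve stale value on failure
            return fallback                      # both fetch and cache failed
```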

## Linking prompts to traces

To track which prompts and versions are used in which generations, include the `$ai_prompt_name` and `$ai_prompt_version` properties when capturing:

### Python

```python
response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "system", "content": system_prompt}],
    posthog_properties={
        "$ai_prompt_name": "support-system-prompt",
        "$ai_prompt_version": 2
    }
)
```

### TypeScript

```typescript
const response = await openai.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "system", content: systemPrompt }],
    posthogProperties: {
        $ai_prompt_name: "support-system-prompt",
        $ai_prompt_version: 2
    }
})
```

Once linked, you can:

-   Filter generations by prompt name and version
-   View related traces from the prompt detail page
-   See which specific version was used in each trace
-   Analyze which prompt versions perform best

## Limits

-   **Maximum prompt size** - 1MB per prompt
-   **Maximum versions per prompt** - 2,000
