Custom properties

Custom properties in LLM analytics enable you to add metadata to your AI generations, making it easier to filter, analyze, and understand your LLM usage patterns.

This guide shows you how to set custom properties using PostHog's LLM analytics SDKs and leverage them for better observability. For specific integration patterns, see our guides on linking to session replay and linking to error tracking.

Why use custom properties?

Custom properties help you:

  • Filter traces by specific criteria (e.g., subscription tier, feature flags, account settings)
  • Track prompt versions to measure improvements over time
  • Link backend LLM events to session replays and error tracking
  • Group related generations by custom business logic (e.g., sessions, conversations, tenants)
  • Monitor costs by user segments or features

Setting custom properties

You can add custom properties to any LLM generation using the posthogProperties parameter (JavaScript) or posthog_properties parameter (Python). These properties will appear in the $ai_generation event alongside the automatically captured metrics in your PostHog dashboard.

Basic example

import { OpenAI } from '@posthog/ai'
import { PostHog } from 'posthog-node'

const phClient = new PostHog(
  '<ph_project_api_key>',
  { host: 'https://us.i.posthog.com' }
)

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  posthog: phClient
})

const response = await openai.responses.create({
  model: 'gpt-5',
  input: [{ role: 'user', content: 'Hello' }],
  posthogProperties: {
    customProperty: 'customValue',
    conversationId: 'conv_abc123',
    subscriptionTier: 'premium',
    feature: 'chatAssistant'
  }
})
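
If you run this in a short-lived script, flush queued events before the process exits, since posthog-node batches events in the background. In recent versions of posthog-node:

// Make sure all queued events are sent before the process exits
await phClient.shutdown()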

Common use cases

1. Subscription tier tracking

Track LLM usage by subscription tier or payment plan to monitor costs and usage patterns:

const getSubscriptionTier = (user) => {
  // Your logic to determine the user's subscription tier
  return user.subscription?.tier || 'free'
}

const response = await openai.responses.create({
  model: 'gpt-5',
  input: messages,
  posthogDistinctId: userId,
  posthogProperties: {
    subscriptionTier: getSubscriptionTier(user),
    monthlyUsage: user.currentMonthUsage,
    rateLimited: user.isRateLimited
  }
})

2. Prompt versioning

Track different versions of your prompts to measure improvements:

const PROMPT_VERSION = "v2.3.1"
const PROMPT_ID = "customer_support_agent"

const systemPrompt = getPromptTemplate(PROMPT_ID, PROMPT_VERSION)

const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-0',
  max_tokens: 1024,
  // Anthropic takes the system prompt as a top-level parameter, not a message role
  system: systemPrompt,
  messages: [
    { role: 'user', content: userMessage }
  ],
  posthogProperties: {
    prompt_id: PROMPT_ID,
    prompt_version: PROMPT_VERSION,
    prompt_length: systemPrompt.length, // character count of the system prompt
    experiment_variant: 'detailed_instructions'
  }
})
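
getPromptTemplate above stands in for however you store and version prompts. A minimal sketch using an in-memory, version-keyed map (the store, its contents, and the error handling are illustrative, not part of any SDK):

// Hypothetical prompt store, keyed by prompt ID and version
const PROMPT_TEMPLATES = {
  customer_support_agent: {
    'v2.3.1': 'You are a customer support agent. Give detailed, step-by-step answers.',
    'v2.2.0': 'You are a customer support agent.'
  }
}

function getPromptTemplate(promptId, version) {
  const template = PROMPT_TEMPLATES[promptId]?.[version]
  if (!template) throw new Error(`Unknown prompt: ${promptId}@${version}`)
  return template
}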

3. Custom generation names

Set meaningful names for your LLM generations to improve trace readability:

// For Vercel AI SDK
import { withTracing } from '@posthog/ai'
import { createOpenAI } from '@ai-sdk/openai'
import { generateText } from 'ai'

const openaiClient = createOpenAI({ apiKey: process.env.OPENAI_API_KEY })

const model = withTracing(
  openaiClient('gpt-5'),
  phClient,
  {
    posthogProperties: {
      $ai_span_name: "Generate Product Description",
      product_category: "electronics",
      target_length: "short"
    }
  }
)

const { text } = await generateText({
  model: model,
  prompt: `Write a product description for: ${productName}`
})

The $ai_span_name property will appear as the primary label in your trace visualization, making it easier to identify specific operations.

Filtering in the dashboard

Once you've set custom properties, they appear in the PostHog LLM analytics dashboard where you can:

  1. Filter generations by any custom property
  2. Create insights based on custom properties
  3. Build dashboards segmented by your custom fields

For example, after setting a conversationId property on every generation in a conversation (sketched after this list), you can:

  • Filter the generations table to show only events from a specific conversation
  • Create a funnel to track conversation completion rates
  • Build a dashboard showing average cost per conversation by user subscription tier
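
Here is a minimal sketch of that tagging, assuming a hypothetical chatTurn helper that wraps the basic example above and reuses one conversationId for every turn:

// Hypothetical helper: every generation in a conversation carries the same ID,
// so the dashboard can filter and aggregate per conversation
async function chatTurn(conversationId, userId, messages) {
  return openai.responses.create({
    model: 'gpt-5',
    input: messages,
    posthogDistinctId: userId,
    posthogProperties: {
      conversationId: conversationId,
      turnNumber: messages.filter((m) => m.role === 'user').length
    }
  })
}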

Your custom properties will appear in the event details panel alongside the automatically captured properties like model, tokens, and latency.
