Generations

Generations are events that capture an LLM request. The generations tab lists them along with the properties autocaptured by PostHog, such as the person, model, total cost, token usage, and more.

When you expand a generation, you can see its properties, metadata, and the conversation history, including each message's role (system, user, or assistant), input content, and output content.

Event properties

A generation is a single call to an LLM.

Event Name: $ai_generation

$ai_trace_id: The trace ID (a UUID used to group AI events), such as a conversation_id. Must contain only letters, numbers, and the special characters -, _, ~, ., @, (, ), !, ', :, |
$ai_model: The model used. Example: gpt-3.5-turbo
$ai_provider: The LLM provider. Example: openai, anthropic, gemini
$ai_input: List of messages sent to the LLM. Example: [{"role": "user", "content": [{"type": "text", "text": "What's in this image?"}, {"type": "image", "image": "https://example.com/image.jpg"}, {"type": "function", "function": {"name": "get_weather", "arguments": {"location": "San Francisco"}}}]}]
$ai_input_tokens: The number of tokens in the input (often found in response.usage)
$ai_output_choices: List of response choices from the LLM. Example: [{"role": "assistant", "content": [{"type": "text", "text": "I can see a hedgehog in the image."}, {"type": "function", "function": {"name": "get_weather", "arguments": {"location": "San Francisco"}}}]}]
$ai_output_tokens: The number of tokens in the output (often found in response.usage)
$ai_latency (optional): The latency of the LLM call, in seconds
$ai_http_status (optional): The HTTP status code of the response
$ai_base_url (optional): The base URL of the LLM provider. Example: https://api.openai.com/v1
$ai_request_url (optional): The full URL of the request made to the LLM API. Example: https://api.openai.com/v1/chat/completions
$ai_is_error (optional): Boolean indicating whether the request resulted in an error
$ai_error (optional): The error message or object

Cost properties (optional; if not provided, costs are calculated automatically from token counts)

$ai_input_cost_usd (optional): The cost in USD of the input tokens
$ai_output_cost_usd (optional): The cost in USD of the output tokens
$ai_total_cost_usd (optional): The total cost in USD (input + output)

Cache properties

$ai_cache_read_input_tokens (optional): Number of tokens read from the cache
$ai_cache_creation_input_tokens (optional): Number of tokens written to the cache (Anthropic-specific)

Model parameters

$ai_temperature (optional): Temperature parameter used in the LLM request
$ai_stream (optional): Whether the response was streamed
$ai_max_tokens (optional): Maximum number of tokens allowed in the LLM response
$ai_tools (optional): Tools/functions available to the LLM. Example: [{"type": "function", "function": {"name": "get_weather", "parameters": {...}}}]

Span/trace properties

$ai_span_id (optional): Unique identifier for this generation
$ai_span_name (optional): Name given to this generation. Example: summarize_text
$ai_parent_id (optional): Parent span ID, used to group events into a tree view
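
Since every event in a trace shares the same $ai_trace_id, and that ID must follow the character rules above, it can help to generate and validate trace IDs in one place. Below is a minimal Python sketch; new_trace_id, validate_trace_id, and the regex are illustrative helpers derived from the allowed character list above, not part of any PostHog SDK.

Python
import re
import uuid

# Allowed characters per the $ai_trace_id rules:
# letters, numbers, and -, _, ~, ., @, (, ), !, ', :, |
TRACE_ID_PATTERN = re.compile(r"^[A-Za-z0-9\-_~.@()!':|]+$")


def new_trace_id() -> str:
    # A UUID satisfies the character rules.
    return str(uuid.uuid4())


def validate_trace_id(trace_id: str) -> bool:
    # Useful when reusing an existing ID, e.g. a conversation_id.
    return bool(TRACE_ID_PATTERN.match(trace_id))


assert validate_trace_id(new_trace_id())
assert validate_trace_id("conversation_42")
assert not validate_trace_id("has spaces")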

Example

Terminal
curl -X POST "https://us.i.posthog.com/i/v0/e/" \
  -H "Content-Type: application/json" \
  -d '{
    "api_key": "<ph_project_api_key>",
    "event": "$ai_generation",
    "properties": {
      "distinct_id": "user_123",
      "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
      "$ai_model": "gpt-4o",
      "$ai_provider": "openai",
      "$ai_input": [{"role": "user", "content": [{"type": "text", "text": "Analyze this data and suggest improvements"}]}],
      "$ai_input_tokens": 150,
      "$ai_output_choices": [{"role": "assistant", "content": [{"type": "text", "text": "Based on the data, here are my suggestions..."}]}],
      "$ai_output_tokens": 280,
      "$ai_latency": 2.45,
      "$ai_http_status": 200,
      "$ai_base_url": "https://api.openai.com/v1",
      "$ai_request_url": "https://api.openai.com/v1/chat/completions",
      "$ai_is_error": false,
      "$ai_temperature": 0.7,
      "$ai_stream": false,
      "$ai_max_tokens": 500,
      "$ai_tools": [{"type": "function", "function": {"name": "analyze_data", "description": "Analyzes data and provides insights", "parameters": {"type": "object", "properties": {"data_type": {"type": "string"}}}}}],
      "$ai_cache_read_input_tokens": 50,
      "$ai_span_name": "data_analysis_chat"
    },
    "timestamp": "2025-01-30T12:00:00Z"
  }'
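
If you'd rather send the event from code than from the command line, the same request can be made with Python's requests library. This is a sketch of the curl call above (trimmed to the core properties), not PostHog SDK usage; in production you would typically use a PostHog SDK or one of its LLM observability integrations instead.

Python
import requests

# Same payload as the curl example above, trimmed to the core properties.
payload = {
    "api_key": "<ph_project_api_key>",
    "event": "$ai_generation",
    "properties": {
        "distinct_id": "user_123",
        "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
        "$ai_model": "gpt-4o",
        "$ai_provider": "openai",
        "$ai_input": [{"role": "user", "content": [{"type": "text", "text": "Analyze this data and suggest improvements"}]}],
        "$ai_input_tokens": 150,
        "$ai_output_choices": [{"role": "assistant", "content": [{"type": "text", "text": "Based on the data, here are my suggestions..."}]}],
        "$ai_output_tokens": 280,
        "$ai_latency": 2.45,
        "$ai_http_status": 200,
    },
    "timestamp": "2025-01-30T12:00:00Z",
}

response = requests.post("https://us.i.posthog.com/i/v0/e/", json=payload)
response.raise_for_status()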
