Traces

A trace is a collection of generations and spans that captures a full interaction between a user and an LLM. The traces tab lists traces along with properties autocaptured by PostHog, like the person, total cost, total latency, and more.

Clicking on a trace opens a timeline of the interaction, enabling you to see the entire conversation, details about the trace, and the individual generation and span events.

Event properties

A trace is a group that contains multiple spans, generations, and embeddings. Traces can be manually sent as events or appear as pseudo-events automatically created from child events.

Event Name: $ai_trace

| Property | Description |
|---|---|
| $ai_trace_id | The trace ID (a UUID used to group related AI events together). Must contain only letters, numbers, and the special characters -, _, ~, ., @, (, ), !, ', :, and \|. Example: d9222e05-8708-41b8-98ea-d4a21849e761 |
| $ai_input_state | The input of the whole trace. Example: [{"role": "user", "content": "What's the weather in SF?"}] or any JSON-serializable state |
| $ai_output_state | The output of the whole trace. Example: [{"role": "assistant", "content": "The weather in San Francisco is..."}] or any JSON-serializable state |
| $ai_latency | Optional. The latency of the trace in seconds |
| $ai_span_name | Optional. The name of the trace. Example: chat_completion, rag_pipeline |
| $ai_is_error | Optional. Boolean indicating whether the trace encountered an error |
| $ai_error | Optional. The error message or object if the trace failed |

Pseudo-trace events

When you send generation ($ai_generation), span ($ai_span), or embedding ($ai_embedding) events with a $ai_trace_id, PostHog automatically creates a pseudo-trace event that appears in the dashboard as a parent grouping. These pseudo-traces:

  • Are not actual events in your data
  • Automatically aggregate metrics from child events (latency, tokens, costs)
  • Provide a hierarchical view of your AI operations
  • Do not require sending an explicit $ai_trace event

This means you can either:

  1. Send explicit $ai_trace events to control the trace metadata
  2. Let PostHog automatically create pseudo-traces from your generation/span events
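
If you go with the second option, a single child event carrying $ai_trace_id is enough, and PostHog builds the parent pseudo-trace for you. A minimal sketch using a generation event (the values are illustrative; $ai_latency here follows the generation event schema and is included only to show a metric that rolls up into the pseudo-trace):

Terminal
# A generation event that only needs $ai_trace_id to be grouped into a pseudo-trace;
# no explicit $ai_trace event is required
curl -X POST "https://us.i.posthog.com/i/v0/e/" \
     -H "Content-Type: application/json" \
     -d '{
         "api_key": "<ph_project_api_key>",
         "event": "$ai_generation",
         "properties": {
             "distinct_id": "user_123",
             "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
             "$ai_latency": 1.23
         },
         "timestamp": "2025-01-30T12:00:00Z"
     }'

Any other generation or span events sent with the same $ai_trace_id are aggregated under the same pseudo-trace.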

Example

Terminal
curl -X POST "https://us.i.posthog.com/i/v0/e/" \
     -H "Content-Type: application/json" \
     -d '{
         "api_key": "<ph_project_api_key>",
         "event": "$ai_trace",
         "properties": {
             "distinct_id": "user_123",
             "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
             "$ai_input_state": [{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
             "$ai_output_state": [{"role": "assistant", "content": "Hedgehogs are small mammals with spines on their back."}],
             "$ai_latency": 1.23,
             "$ai_span_name": "hedgehog_facts_chat",
             "$ai_is_error": false
         },
         "timestamp": "2025-01-30T12:00:00Z"
     }'
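
If the trace failed, the same request can flag the error using the $ai_is_error and $ai_error properties from the table above; a minimal sketch (the UUID and error message are illustrative):

Terminal
# Capturing a failed trace with the error properties set
curl -X POST "https://us.i.posthog.com/i/v0/e/" \
     -H "Content-Type: application/json" \
     -d '{
         "api_key": "<ph_project_api_key>",
         "event": "$ai_trace",
         "properties": {
             "distinct_id": "user_123",
             "$ai_trace_id": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
             "$ai_input_state": [{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
             "$ai_is_error": true,
             "$ai_error": "Upstream model request timed out"
         },
         "timestamp": "2025-01-30T12:00:00Z"
     }'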
