Spans

Spans are individual operations within your LLM application, such as function calls, vector searches, or data retrieval steps. They provide granular visibility into your application's execution flow beyond LLM calls alone.

While generations capture LLM interactions and traces group related operations, spans track atomic operations that make up your workflow:

  • Vector database searches - Document and embedding retrieval
  • Tool/function calls - API calls, calculations, database queries
  • RAG pipeline steps - Retrieval, reranking, context building
  • Data processing - Validation, chunking, formatting

For technical implementation details, see manual capture.
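The operations above can be sketched as a small helper that times a step and assembles the `$ai_span` event properties described on this page. The property names come from this page; the helper function, its signature, and the stand-in retrieval step are illustrative, not part of the PostHog SDK.

```python
import time
import uuid

def run_as_span(span_name, trace_id, fn, input_state, parent_id=None):
    """Run fn(input_state), time it, and return $ai_span event
    properties. Illustrative helper, not a PostHog SDK function."""
    start = time.perf_counter()
    output_state = fn(input_state)
    latency = time.perf_counter() - start
    return {
        "$ai_trace_id": trace_id,
        "$ai_span_id": str(uuid.uuid4()),
        "$ai_span_name": span_name,
        "$ai_parent_id": parent_id or trace_id,
        "$ai_input_state": input_state,
        "$ai_output_state": output_state,
        "$ai_latency": latency,
        "$ai_is_error": False,
    }

# Hypothetical retrieval step standing in for a real vector search.
props = run_as_span(
    "vector_search",
    trace_id=str(uuid.uuid4()),
    fn=lambda state: {"results": ["doc1", "doc2"], "count": 2},
    input_state={"query": "search for documents about hedgehogs"},
)
```

The resulting dictionary can then be sent as the properties of an `$ai_span` event, as shown in the example below.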

Event properties

A span is a single action within your application, such as a function call or vector database search.

Event Name: $ai_span

Properties:

  • $ai_trace_id - The trace ID (a UUID to group related AI events together). Must contain only letters, numbers, and the following characters: -, _, ~, ., @, (, ), !, ', :, |
    Example: d9222e05-8708-41b8-98ea-d4a21849e761
  • $ai_input_state - The input state of the span.
    Example: {"query": "search for documents about hedgehogs"} or any JSON-serializable state
  • $ai_output_state - The output state of the span.
    Example: {"results": ["doc1", "doc2"], "count": 2} or any JSON-serializable state
  • $ai_latency - Optional. The latency of the span in seconds.
    Example: 0.361
  • $ai_span_name - Optional. The name of the span.
    Example: vector_search, data_retrieval, tool_call
  • $ai_span_id - Optional. A unique identifier for this span.
    Example: bdf42359-9364-4db7-8958-c001f28c9255
  • $ai_parent_id - Optional. The parent ID used for tree-view grouping (either the trace_id or another span_id).
    Example: 537b7988-0186-494f-a313-77a5a8f7db26
  • $ai_is_error - Optional. Boolean indicating whether the span encountered an error.
  • $ai_error - Optional. The error message or object if the span failed.
    Example: {"message": "Connection timeout", "code": "TIMEOUT"}
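The character rule for `$ai_trace_id` above can be expressed as a small regex check. The function name here is illustrative, not a PostHog SDK function.

```python
import re

# Allowed per the $ai_trace_id rule above: letters, numbers,
# and the characters - _ ~ . @ ( ) ! ' : |
_TRACE_ID_RE = re.compile(r"^[A-Za-z0-9\-_~.@()!':|]+$")

def is_valid_trace_id(trace_id: str) -> bool:
    """Illustrative validator for the $ai_trace_id format."""
    return bool(_TRACE_ID_RE.fullmatch(trace_id))

ok = is_valid_trace_id("d9222e05-8708-41b8-98ea-d4a21849e761")  # True
bad = is_valid_trace_id("bad trace id")  # False: spaces are not allowed
```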

Example

Terminal
curl -X POST "https://us.i.posthog.com/i/v0/e/" \
  -H "Content-Type: application/json" \
  -d '{
    "api_key": "<ph_project_api_key>",
    "event": "$ai_span",
    "properties": {
      "distinct_id": "user_123",
      "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
      "$ai_input_state": {"query": "search for documents about hedgehogs", "filters": {"category": "animals"}},
      "$ai_output_state": {"results": [{"id": "doc_1", "content": "Hedgehogs are small mammals..."}, {"id": "doc_2", "content": "These nocturnal creatures..."}], "count": 2},
      "$ai_latency": 0.145,
      "$ai_span_name": "vector_search",
      "$ai_span_id": "bdf42359-9364-4db7-8958-c001f28c9255",
      "$ai_parent_id": "537b7988-0186-494f-a313-77a5a8f7db26",
      "$ai_is_error": false
    },
    "timestamp": "2025-01-30T12:00:00Z"
  }'
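Because `$ai_parent_id` points at either the trace ID or another span ID, a batch of span events can be reassembled into a tree client-side. This grouping logic is a sketch of the idea, not how PostHog itself renders traces.

```python
from collections import defaultdict

def group_spans(trace_id, spans):
    """Build a parent -> children map from $ai_parent_id.
    Spans whose parent is the trace ID sit at the root."""
    children = defaultdict(list)
    for span in spans:
        parent = span.get("$ai_parent_id", trace_id)
        children[parent].append(span["$ai_span_id"])
    return children

trace_id = "d9222e05-8708-41b8-98ea-d4a21849e761"
spans = [
    {"$ai_span_id": "root-span", "$ai_parent_id": trace_id},
    {"$ai_span_id": "child-span", "$ai_parent_id": "root-span"},
]
tree = group_spans(trace_id, spans)
# tree[trace_id] holds root spans; tree["root-span"] holds its children
```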
