Convex LLM analytics installation
1. Install dependencies (Required)

Install the PostHog AI package, the Vercel AI SDK, and the OpenTelemetry SDK.
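For example, with npm. The exact package names here are assumptions: the PostHog AI package as `@posthog/ai`, the Vercel AI SDK as `ai` with the `@ai-sdk/openai` provider, and the OpenTelemetry Node SDK as `@opentelemetry/sdk-node`.

```bash
npm install @posthog/ai ai @ai-sdk/openai @opentelemetry/sdk-node
```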
2. Set environment variables (Required)

Set your PostHog project API key and host as Convex environment variables. You can find these in your project settings.
You also need your AI provider's API key (e.g. `OPENAI_API_KEY`).
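For example, using the Convex CLI. The variable names `POSTHOG_API_KEY` and `POSTHOG_HOST` are illustrative; use whatever names your action reads, and swap in your own values.

```bash
npx convex env set POSTHOG_API_KEY phc_your_project_api_key
npx convex env set POSTHOG_HOST https://us.i.posthog.com
npx convex env set OPENAI_API_KEY sk-your-openai-key
```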
3. Capture LLM events with OpenTelemetry (Required)

Convex actions run in a Node.js-compatible environment when you add the `"use node"` directive. Create an action that initializes the OpenTelemetry SDK with PostHog's trace exporter and enables telemetry on your AI SDK calls.
How this works: The `PostHogTraceExporter` sends OpenTelemetry `gen_ai.*` spans to PostHog's OTLP ingestion endpoint. PostHog converts these into `$ai_generation` events automatically. The `posthog_distinct_id` metadata field links events to a specific user.
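A minimal sketch of such an action. It assumes `PostHogTraceExporter` is exported from `@posthog/ai` and accepts the API key and host as constructor options, that the environment variables from the previous step are set, and that the file lives at `convex/chat.ts`; check the PostHog AI package for the exact export and options.

```typescript
"use node";
// convex/chat.ts (hypothetical file name)

import { action } from "./_generated/server";
import { v } from "convex/values";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { PostHogTraceExporter } from "@posthog/ai"; // import path is an assumption
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Start the OpenTelemetry SDK once per module with PostHog's trace exporter.
// The constructor options (apiKey, host) are assumptions; see the package docs.
const sdk = new NodeSDK({
  traceExporter: new PostHogTraceExporter({
    apiKey: process.env.POSTHOG_API_KEY!,
    host: process.env.POSTHOG_HOST,
  }),
});
sdk.start();

export const chat = action({
  args: { prompt: v.string(), userId: v.string() },
  handler: async (_ctx, { prompt, userId }) => {
    const { text } = await generateText({
      model: openai("gpt-5-mini"),
      prompt,
      // Enable AI SDK telemetry so gen_ai.* spans are emitted for this call.
      experimental_telemetry: {
        isEnabled: true,
        metadata: {
          // Links the resulting $ai_generation event to this user in PostHog.
          posthog_distinct_id: userId,
        },
      },
    });
    return text;
  },
});
```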
4. Using Convex Agent (Optional)

If you're using `@convex-dev/agent`, pass `experimental_telemetry` to the agent's `generateText` call, as sketched below.
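A sketch of what this could look like. It assumes the agent component is already installed and configured in `convex.config.ts`; the `Agent` constructor options and the `generateText` signature shown here (`languageModel`, the `{ userId }` second argument) are assumptions, so check the `@convex-dev/agent` docs for the exact API. The OpenTelemetry setup from the previous step still needs to be running so the spans are exported.

```typescript
"use node";
// convex/agentChat.ts (hypothetical file name)

import { action } from "./_generated/server";
import { v } from "convex/values";
import { Agent } from "@convex-dev/agent";
import { components } from "./_generated/api";
import { openai } from "@ai-sdk/openai";

// Constructor option names are assumptions; adjust to your agent component version.
const supportAgent = new Agent(components.agent, {
  name: "support-agent",
  languageModel: openai("gpt-5-mini"),
  instructions: "You are a helpful support assistant.",
});

export const ask = action({
  args: { prompt: v.string(), userId: v.string() },
  handler: async (ctx, { prompt, userId }) => {
    const result = await supportAgent.generateText(
      ctx,
      { userId },
      {
        prompt,
        // Forwarded to the underlying AI SDK call so the generation is
        // captured as an $ai_generation event tied to this user.
        experimental_telemetry: {
          isEnabled: true,
          metadata: { posthog_distinct_id: userId },
        },
      },
    );
    return result.text;
  },
});
```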
Captured `$ai_generation` events include the following properties:

| Property | Description |
| --- | --- |
| `$ai_model` | The specific model, like `gpt-5-mini` or `claude-4-sonnet` |
| `$ai_latency` | The latency of the LLM call in seconds |
| `$ai_time_to_first_token` | Time to first token in seconds (streaming only) |
| `$ai_tools` | Tools and functions available to the LLM |
| `$ai_input` | List of messages sent to the LLM |
| `$ai_input_tokens` | The number of tokens in the input (often found in `response.usage`) |
| `$ai_output_choices` | List of response choices from the LLM |
| `$ai_output_tokens` | The number of tokens in the output (often found in `response.usage`) |
| `$ai_total_cost_usd` | The total cost in USD (input + output) |

This is a partial list; see the full list of properties.
5. Next steps (Recommended)

Now that you're capturing AI conversations, continue with the resources below to learn what else LLM Analytics enables within the PostHog platform.

| Resource | Description |
| --- | --- |
| Basics | Learn the basics of how LLM calls become events in PostHog. |
| Generations | Read about the `$ai_generation` event and its properties. |
| Traces | Explore the trace hierarchy and how to use it to debug LLM calls. |
| Spans | Review spans and their role in representing individual operations. |
| Analyze LLM performance | Learn how to create dashboards to analyze LLM performance. |

