Analyze traces with PostHog AI

PostHog AI investigates LLM traces, analyzes costs, and monitors quality trends across your AI products using natural language. Ask about token usage, latency, errors, or costs – and PostHog AI queries your trace data to find answers.

How it works

PostHog AI can query across all your trace data, including:

  • Token usage and costs – total spend, cost per conversation, cost by model or feature
  • Latency – response times, slow calls, p95/p99 latencies
  • Errors – failed LLM calls, timeouts, rate limits
  • Model comparisons – side-by-side performance across GPT-4, Claude, Gemini, or any model you use
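
To make these query types concrete, here is a minimal sketch of the kinds of aggregations this implies, run over hypothetical in-memory trace events. The field names (`trace_id`, `cost_usd`, `latency_s`, and so on) are illustrative assumptions, not PostHog's actual event schema.

```python
# Hypothetical LLM trace events; field names are illustrative, not PostHog's schema.
trace_events = [
    {"trace_id": "t1", "model": "gpt-4o", "input_tokens": 1200, "output_tokens": 300,
     "cost_usd": 0.021, "latency_s": 1.8, "error": None},
    {"trace_id": "t1", "model": "gpt-4o", "input_tokens": 800, "output_tokens": 150,
     "cost_usd": 0.012, "latency_s": 0.9, "error": None},
    {"trace_id": "t2", "model": "claude-3-5-sonnet", "input_tokens": 2000, "output_tokens": 500,
     "cost_usd": 0.018, "latency_s": 2.4, "error": "rate_limit"},
]

# Total spend and cost per conversation (grouped by trace)
total_cost = sum(e["cost_usd"] for e in trace_events)
cost_per_trace = {}
for e in trace_events:
    cost_per_trace[e["trace_id"]] = cost_per_trace.get(e["trace_id"], 0.0) + e["cost_usd"]

# Error rate across all LLM calls
error_rate = sum(1 for e in trace_events if e["error"]) / len(trace_events)

print(round(total_cost, 3))            # 0.051
print(round(cost_per_trace["t1"], 3))  # 0.033
print(round(error_rate, 2))            # 0.33
```

In practice you ask these questions in natural language and PostHog AI performs the equivalent grouping and aggregation over your real trace data.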

Built-in AI features for LLM analytics

PostHog also provides dedicated AI features that run automatically on your LLM data. Trace summarization generates AI-powered summaries of traces with flow diagrams and key findings, which is useful for understanding complex multi-step conversations without reading every message.

Sentiment classification classifies user messages as positive, neutral, or negative using a local ML model, helping you monitor user satisfaction across your AI features.
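
To illustrate the shape of the task only, here is a toy keyword-based classifier. This heuristic is purely for illustration; PostHog's sentiment classification uses a local ML model, not keyword matching.

```python
# Toy sentiment heuristic for illustration only -- PostHog's actual
# feature uses a local ML model, not keyword matching.
POSITIVE = {"great", "thanks", "love", "perfect", "helpful"}
NEGATIVE = {"broken", "wrong", "hate", "useless", "slow"}

def classify_sentiment(message: str) -> str:
    words = {w.strip(".,!?") for w in message.lower().split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify_sentiment("thanks, that was perfect"))  # positive
```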

Tips for better results

  • Filter by model – "Show costs for GPT-4o only" narrows results to a specific model
  • Set cost thresholds – "Find conversations that cost more than $0.50" helps you identify expensive calls
  • Name your features – If you've tagged traces with feature names, reference them directly (e.g. "costs for the search feature")
  • Ask about trends – "How has LLM spend changed over the past 30 days?" gives you a time-series view
  • Compare models – "Compare latency between GPT-4 and Claude for the chat feature" helps you evaluate model choices
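
The tips above map to straightforward filters and groupings. Here is a sketch applying them to hypothetical trace records; the `feature` tag and field names are assumptions for illustration, not PostHog's actual schema.

```python
# Hypothetical trace records; the "feature" tag and field names are assumptions.
traces = [
    {"model": "gpt-4o", "feature": "search", "cost_usd": 0.62, "latency_s": 2.1},
    {"model": "gpt-4o", "feature": "chat", "cost_usd": 0.08, "latency_s": 0.7},
    {"model": "claude-3-5-sonnet", "feature": "chat", "cost_usd": 0.11, "latency_s": 1.2},
]

# "Show costs for GPT-4o only"
gpt4o_cost = sum(t["cost_usd"] for t in traces if t["model"] == "gpt-4o")

# "Find conversations that cost more than $0.50"
expensive = [t for t in traces if t["cost_usd"] > 0.50]

# "Compare latency between models for the chat feature"
chat_latency = {t["model"]: t["latency_s"] for t in traces if t["feature"] == "chat"}

print(round(gpt4o_cost, 2))  # 0.7
print(len(expensive))        # 1
```

If you tag traces with feature names when capturing them, PostHog AI can apply this kind of feature-level filter for you when you reference the feature by name.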

Get started

To start analyzing traces, set up PostHog AI in your project.
