Getting started with LLM analytics


Capture LLM conversations

LLM analytics gives you x-ray vision into your LLM applications. You can track:

  • 🗣️ Every conversation (inputs, outputs, and tokens)
  • 🤖 Model performance (cost, latency, and error rates)
  • 🔍 Full traces for when you need to go into detective mode
  • 💰 How much each chat/user/organization is costing you

The first step is to install a PostHog SDK to capture conversations, requests, and responses from an LLM provider.
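If you want a feel for what the SDK sends, a single LLM call boils down to one event with a set of AI properties attached. Here's a minimal sketch in Python that assembles that payload by hand, using the `$ai_generation` event and `$ai_*` property names from PostHog's LLM analytics conventions (check the current docs for the exact names):

```python
# Sketch: assembling one LLM call as a PostHog "$ai_generation" event payload.
# Property names ($ai_model, $ai_input_tokens, ...) follow PostHog's LLM
# analytics conventions — verify them against the current docs.
import uuid

def build_generation_event(model, messages, output, usage, latency_s, trace_id=None):
    """Assemble the properties dict for a single LLM generation."""
    return {
        "$ai_trace_id": trace_id or str(uuid.uuid4()),  # groups related calls
        "$ai_model": model,
        "$ai_provider": "openai",
        "$ai_input": messages,
        "$ai_output_choices": [{"role": "assistant", "content": output}],
        "$ai_input_tokens": usage["prompt_tokens"],
        "$ai_output_tokens": usage["completion_tokens"],
        "$ai_latency": latency_s,
    }

props = build_generation_event(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi!"}],
    output="Hello! How can I help?",
    usage={"prompt_tokens": 8, "completion_tokens": 6},
    latency_s=0.42,
)

# With the posthog client configured, you would then send it, e.g.:
# posthog.capture(distinct_id="user_123", event="$ai_generation", properties=props)
```

In practice the SDK wrappers build and send this for you — this is just to show what ends up in your event stream.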

Platforms

Beta 🚧

LLM analytics is currently considered in beta. To access it, enable the feature preview in your PostHog account.

We are keen to gather as much feedback as possible, so if you try this out, please let us know. You can email peter@posthog.com and radu@posthog.com, send feedback via the in-app support panel, or use one of our other support options.

Install PostHog SDK

Record generations

Once you've installed the SDK, every LLM call automatically becomes a generation – a detailed record of what went in and what came out. Each generation captures:

  • 📝 Complete conversation context (inputs and outputs)
  • 🔢 Token counts and usage metrics
  • ⏱️ Response latency and performance data
  • 💸 Automatic cost calculation based on model pricing
  • 🔗 Trace IDs to group related LLM calls together

PostHog's SDK wrappers handle all the heavy lifting. Use your LLM provider as normal and we'll capture everything automatically.
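Conceptually, a wrapper times each provider call, collects token usage, and attaches a shared trace ID so related calls group into one trace. Here's a simplified sketch of that idea with a stubbed client (`fake_llm_call` is a stand-in, not the real SDK or a real provider):

```python
# Sketch of what an SDK wrapper does behind the scenes: time the call,
# collect token usage, and attach a shared trace ID. The real PostHog
# wrappers handle this automatically.
import time
import uuid

def fake_llm_call(prompt):
    """Stand-in for a provider client; returns text plus token usage."""
    return {
        "text": f"Echo: {prompt}",
        "prompt_tokens": len(prompt.split()),
        "completion_tokens": 2,
    }

def traced_generation(prompt, trace_id, capture):
    """Run one generation and emit a generation record via `capture`."""
    start = time.perf_counter()
    result = fake_llm_call(prompt)
    latency = time.perf_counter() - start
    capture({
        "$ai_trace_id": trace_id,
        "$ai_input": prompt,
        "$ai_output_choices": result["text"],
        "$ai_input_tokens": result["prompt_tokens"],
        "$ai_output_tokens": result["completion_tokens"],
        "$ai_latency": latency,
    })
    return result["text"]

events = []
trace_id = str(uuid.uuid4())
traced_generation("What is PostHog?", trace_id, events.append)
traced_generation("Tell me more", trace_id, events.append)

# Both generations share a trace ID, so they group into one trace.
assert events[0]["$ai_trace_id"] == events[1]["$ai_trace_id"]
```

The key design point is the shared trace ID: pass the same one through every call in a conversation and PostHog can reassemble the full trace.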

Learn about generations

Evaluate model usage

PostHog's LLM analytics dashboard provides a comprehensive overview of your LLM performance. Break usage metrics down by model, latency, cost, and more.
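The cost numbers on the dashboard come from token counts multiplied by per-token model pricing. A minimal sketch of that arithmetic (the prices below are made-up placeholders, not real rates — PostHog uses actual model pricing):

```python
# Sketch: deriving per-generation cost from token counts and model pricing.
# Prices are placeholder values for illustration only.
PRICES_PER_1M = {
    # model: (input USD per 1M tokens, output USD per 1M tokens)
    "model-a": (0.50, 1.50),
}

def generation_cost(model, input_tokens, output_tokens):
    """Cost of one generation in USD, given per-1M-token prices."""
    in_price, out_price = PRICES_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

cost = generation_cost("model-a", input_tokens=1_000, output_tokens=500)
# (1,000 * 0.50 + 500 * 1.50) / 1,000,000 = 0.00125 USD
```

Summing this per chat, user, or organization is how the "how much is each user costing you" breakdowns are built.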

LLM observability dashboard
Analyze LLM performance

Integrate customer data

Take advantage of PostHog's platform to integrate your customer data with LLM analytics.

Product analytics

All LLM analytics are captured as standard PostHog events, which means you can create dashboards, trends, funnels, custom SQL queries, alerts, and more.
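Because generations are ordinary events, any aggregation you'd apply to other PostHog events works here too. As an illustration (with hypothetical event data), here's the kind of per-model token breakdown a trends insight would compute:

```python
# Sketch: generations are standard events, so standard aggregations apply.
# Group hypothetical $ai_generation events by model and total their tokens,
# like a trends breakdown would.
from collections import defaultdict

events = [
    {"event": "$ai_generation",
     "properties": {"$ai_model": "model-a", "$ai_input_tokens": 120, "$ai_output_tokens": 40}},
    {"event": "$ai_generation",
     "properties": {"$ai_model": "model-b", "$ai_input_tokens": 80, "$ai_output_tokens": 25}},
    {"event": "$ai_generation",
     "properties": {"$ai_model": "model-a", "$ai_input_tokens": 60, "$ai_output_tokens": 20}},
]

tokens_by_model = defaultdict(int)
for e in events:
    p = e["properties"]
    tokens_by_model[p["$ai_model"]] += p["$ai_input_tokens"] + p["$ai_output_tokens"]

# model-a: 120 + 40 + 60 + 20 = 240 tokens; model-b: 80 + 25 = 105 tokens
```

In the product you'd express this as a trends insight or a custom SQL query over your events rather than in application code.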

LLM observability dashboard

Error tracking

LLM generated errors are automatically captured in PostHog's error tracking for you to monitor, debug, and resolve.

LLM analytics error tracking

Session replay

Watch session replays to see exactly how users interact with your LLM features.

LLM analytics session replay

Use for free

LLM analytics is currently in beta. While in beta, events are priced the same as regular PostHog events, which come with a generous free tier and transparent usage-based pricing.

No credit card required to start. To access LLM analytics, enable the feature preview in your PostHog account.


That's it! You're ready to start integrating.

Install LLM analytics

