# Getting started with LLM analytics

## Capture LLM conversations

LLM analytics gives you x-ray vision into your LLM applications. You can track:

-   🗣️ Every conversation (inputs, outputs, and tokens)
-   🤖 Model performance (cost, latency, and error rates)
-   🔍 Full traces and sessions for when you need to go detective mode
-   💰 How much each chat/user/organization is costing you
-   🔗 Multi-conversation sessions across user visits

> **New to LLM observability?** See [core concepts](/docs/llm-analytics/basics.md) for a primer on events, tokens, and traces.

The first step is to install a PostHog SDK to capture conversations, requests, and responses from an LLM provider.
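If you're using Python, for example, setup is a package install and a client initialization. Here's a minimal sketch; the API key and host below are placeholders for your own project's values:

```python
# pip install posthog
from posthog import Posthog

# Placeholders: use your own project API key and region host.
posthog = Posthog(
    "phc_your_project_api_key",
    host="https://us.i.posthog.com",
)
```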

### Platforms

-   [Anthropic](/docs/llm-analytics/installation/anthropic.md)
-   [AutoGen](/docs/llm-analytics/installation/autogen.md)
-   [AWS Bedrock](/docs/llm-analytics/installation/aws-bedrock.md)
-   [Azure OpenAI](/docs/llm-analytics/installation/azure-openai.md)
-   [Cerebras](/docs/llm-analytics/installation/cerebras.md)
-   [Cohere](/docs/llm-analytics/installation/cohere.md)
-   [Convex](/docs/llm-analytics/installation/convex.md)
-   [CrewAI](/docs/llm-analytics/installation/crewai.md)
-   [DeepSeek](/docs/llm-analytics/installation/deepseek.md)
-   [DSPy](/docs/llm-analytics/installation/dspy.md)
-   [Fireworks AI](/docs/llm-analytics/installation/fireworks-ai.md)
-   [Google](/docs/llm-analytics/installation/google.md)
-   [Groq](/docs/llm-analytics/installation/groq.md)
-   [Helicone](/docs/llm-analytics/installation/helicone.md)
-   [Hugging Face](/docs/llm-analytics/installation/hugging-face.md)
-   [Instructor](/docs/llm-analytics/installation/instructor.md)
-   [LangChain](/docs/llm-analytics/installation/langchain.md)
-   [LangGraph](/docs/llm-analytics/installation/langgraph.md)
-   [LiteLLM](/docs/llm-analytics/installation/litellm.md)
-   [LlamaIndex](/docs/llm-analytics/installation/llamaindex.md)
-   [Manual capture](/docs/llm-analytics/installation/manual-capture.md)
-   [Mastra](/docs/llm-analytics/installation/mastra.md)
-   [Mirascope](/docs/llm-analytics/installation/mirascope.md)
-   [Mistral](/docs/llm-analytics/installation/mistral.md)
-   [Ollama](/docs/llm-analytics/installation/ollama.md)
-   [OpenAI](/docs/llm-analytics/installation/openai.md)
-   [OpenAI Agents SDK](/docs/llm-analytics/installation/openai-agents.md)
-   [OpenClaw](/docs/llm-analytics/installation/openclaw.md)
-   [OpenRouter](/docs/llm-analytics/installation/openrouter.md)
-   [Perplexity](/docs/llm-analytics/installation/perplexity.md)
-   [Pi Coding Agent](/docs/llm-analytics/installation/pi.md)
-   [Portkey](/docs/llm-analytics/installation/portkey.md)
-   [Pydantic AI](/docs/llm-analytics/installation/pydantic-ai.md)
-   [Semantic Kernel](/docs/llm-analytics/installation/semantic-kernel.md)
-   [smolagents](/docs/llm-analytics/installation/smolagents.md)
-   [Together AI](/docs/llm-analytics/installation/together-ai.md)
-   [Vercel AI SDK](/docs/llm-analytics/installation/vercel-ai.md)
-   [xAI](/docs/llm-analytics/installation/xai.md)

[Install PostHog SDK](/docs/llm-analytics/installation.md)

## Track AI generations

Once you've installed the SDK, every LLM call automatically becomes a [generation](/docs/llm-analytics/generations.md) – a detailed record of what went in and what came out. Each generation captures:

-   Complete conversation context (inputs and outputs)
-   Token counts and usage metrics
-   Response latency and performance data
-   Automatic cost calculation based on model pricing
-   Trace IDs to group related LLM calls together

PostHog's SDK wrappers handle all the heavy lifting. Use your LLM provider as normal and we'll capture everything automatically.
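As a minimal sketch of what this looks like with the Python SDK's OpenAI wrapper (the model name, distinct ID, and trace ID below are placeholder values):

```python
from posthog import Posthog
from posthog.ai.openai import OpenAI  # PostHog's drop-in wrapper around the OpenAI client

posthog = Posthog("phc_your_project_api_key", host="https://us.i.posthog.com")
client = OpenAI(api_key="sk-...", posthog_client=posthog)

# Same chat.completions API as the regular OpenAI client; PostHog captures
# the generation (inputs, outputs, tokens, latency, cost) automatically.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain vector databases in one sentence."}],
    posthog_distinct_id="user_123",  # ties the generation to a specific user
    posthog_trace_id="trace_456",    # groups related calls into one trace
)
print(response.choices[0].message.content)
```

The provider and framework pages above cover the equivalent setup for other SDKs; the pattern is the same: swap in the wrapped client and keep your existing call sites.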

[Learn about generations](/docs/llm-analytics/generations.md)

## Evaluate model usage

PostHog's LLM analytics dashboard provides a comprehensive overview of your LLM performance. Break usage metrics down by model, latency, cost, and more.

![LLM observability dashboard](https://res.cloudinary.com/dmukukwp6/image/upload/llma_dashboard_c710e66b5e.png)

[Analyze LLM performance](/docs/llm-analytics/dashboard.md)

## Integrate customer data

Take advantage of PostHog's [platform](/docs.md) to integrate your customer data with LLM analytics.

### Product analytics

All LLM analytics are captured as standard PostHog events, which means you can create dashboards, trends, funnels, custom SQL queries, alerts, and more.
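For example, you can filter any insight down to the `$ai_generation` event to chart generation volume, cost, or latency alongside your existing product metrics.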

![LLM analytics insights](https://res.cloudinary.com/dmukukwp6/image/upload/llma_insights_da40edc407.png)

### Error tracking

LLM-generated errors are automatically captured in PostHog's [error tracking](/docs/error-tracking.md) for you to monitor, debug, and resolve.

![LLM analytics error tracking](https://res.cloudinary.com/dmukukwp6/image/upload/llma_error_4edcb7d7a1.png)

### Session replay

Watch [session replays](/docs/session-replay.md) to see exactly how users interact with your LLM features.

![LLM analytics session replay](https://res.cloudinary.com/dmukukwp6/image/upload/llma_session_replay_95b9268668.png)

## Use for free

PostHog LLM analytics is designed to be cost-effective, with a generous free tier and transparent usage-based pricing. We don't charge per seat, and more than 90% of companies use PostHog for free.

### TL;DR 💸

-   No credit card required to start
-   First 100k LLM events per month are free, with 30-day retention
-   Above 100k, usage-based pricing applies at $0.00006 per event (worked example below)
-   Set billing limits to avoid surprise charges
-   See our [pricing page](/pricing.md) for up-to-date details
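As a quick illustration using the rate above: a month with 1 million LLM events would bill 900,000 events beyond the free tier, or 900,000 × $0.00006 = $54.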

---

That's it! You're ready to start integrating.

[Install LLM analytics](/docs/llm-analytics/installation.md)
