Monitor and debug your AI products
Analyze traces, spans, per-user costs, latency, and more
uses LLM analytics with session replays (and everything else)
"PostHog is amazing. It reins in the chaos to have everything in one place. Otherwise it’s quite overwhelming to try and understand what’s working and what’s not"
compared us to every other observability tool, just to be sure
"If you're building a new product, just use PostHog. It's a no-brainer. It's the only all-in-one platform like it for developers."
Generation tracking
Monitor generation events and prompts, with autocapture
Trace monitoring
Follow the full user interaction, including all generations
Cost reporting
Keep an eye on overall costs, or break it down by model, user, and more
Users tracking
Break down every interaction on an individual user basis
Latency monitoring
Understand latency over time and how models impact performance
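To give a concrete sense of the setup, here's a minimal sketch using the PostHog Python SDK's OpenAI wrapper (keys, host, and model name are placeholders; check the docs for your provider's exact integration):

```python
from posthog import Posthog
from posthog.ai.openai import OpenAI  # PostHog's drop-in wrapper for the OpenAI client

posthog = Posthog("<ph_project_api_key>", host="https://us.i.posthog.com")

# Calls made through this client are autocaptured as generation events,
# including model, token usage, cost, and latency.
client = OpenAI(api_key="<openai_api_key>", posthog_client=posthog)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
)
print(response.choices[0].message.content)
```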
Privacy mode: If you want high-level metrics (like cost and performance) without exposing user conversations to your team, you can enable this in your code.
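As a sketch of what enabling it looks like with the Python SDK (assuming the `privacy_mode` client option; an equivalent per-call option exists on the wrapper):

```python
from posthog import Posthog

# With privacy mode on, prompt and completion text is excluded from captured
# events, while metrics like cost, token usage, and latency still come through.
posthog = Posthog(
    "<ph_project_api_key>",
    host="https://us.i.posthog.com",
    privacy_mode=True,
)
```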
Ready-made dashboards
Use ready-made dashboards for tracking on a per-model or per-user basis
Latency alerts
Get alerts when latency exceeds a threshold, or when it spikes for a specific model.
Cost tracking
Monitor your per-user costs, and combine with revenue analytics for more insights
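Per-user breakdowns work by tagging each generation with the user it belongs to. Continuing the sketch above (the distinct ID and properties here are illustrative):

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize my notes"}],
    posthog_distinct_id="user_123",        # ties this generation's cost to a specific user
    posthog_properties={"plan": "scale"},  # custom properties to slice cost reports by
)
```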
Prompt playground
Test new models against each other, simulate different chat histories, and test different reasoning levels – all inside PostHog.
Works with other PostHog products
You can use LLM analytics by itself, but the magic comes when you use it with other tools and products from PostHog.
Session replay
Watch session recordings to see what users see when interacting with your AI product
Feature flags
See how specific changes impact your metrics, deploy changes to certain users only
Error tracking
Correlate app errors with specific sessions and prompt responses
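As one example of the combination, a feature flag could control which model each user gets, so LLM analytics can compare cost and latency across the two cohorts (the flag key and models here are hypothetical):

```python
from posthog import Posthog

posthog = Posthog("<ph_project_api_key>", host="https://us.i.posthog.com")

# Hypothetical multivariate flag rolling a new model out to a subset of users.
variant = posthog.get_feature_flag("llm-model-experiment", "user_123")
model = "gpt-4o" if variant == "new-model" else "gpt-4o-mini"
# Generations captured with each model can then be compared side by side.
```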
We use LLM analytics, too.
We've used LLM analytics heavily while building Max AI, our in-app AI product manager. At the start, it helped us keep an eye on operational costs so we could figure out what sustainable pricing might look like. When we launched into beta, we also monitored traces to see how Max was being used and to gather feedback.
Having a bespoke tool to track traces has been invaluable for the Max AI team because of how we can connect traces to other data. The ability to easily jump from a trace to a session replay or person profile, for example, lets us see how users interact with Max in real time and understand their wider context too.
"The best thing about LLM analytics for us is how it connects with our other tools, like session replays and feature flags. That's something no other tool can do, because they focus on a narrower scope."
Answer all of these questions (and more) with PostHog LLM analytics.
Usage-based pricing
Use LLM analytics free. Or enter a credit card for advanced features.
Either way, your first 100,000 events are free – every month.
| | Free | All other plans |
|---|---|---|
| | No credit card required | All features, no limitations |
| Events | 100,000/mo | Unlimited |
Features
- Data retention
- LLM analytics dashboard
- Traces
- Generations
- Spans
- LLM playground
- Python SDK
- Node.js SDK
- Cost tracking
- Token usage
- Latency monitoring
- Product analytics integration
- Error tracking integration
- Session replay integration
- Privacy mode
- Custom properties
Monthly pricing

| Events | Price |
|---|---|
| First 100k events | Free |
| 100k+ | $0.00006/event |
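As a worked example at that rate: a month with 1,000,000 events costs nothing for the first 100,000, then (1,000,000 − 100,000) × $0.00006 = $54 for the rest.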
FAQs
PostHog vs...
So, what's best for you?
Reasons a competitor may be best for you (for now...)
- You don't need any product insights and only want to track operational metrics
- You're building a mobile specific product and need deep mobile support
- You don't want to use an open source product
Reasons to choose PostHog
- You want to understand LLM costs on a per user basis, in addition to other axes
- You want to combine LLM analytics with other tools, including error tracking and session replays
- You need easy regulatory compliance for HIPAA and GDPR
Have questions about PostHog?
Ask the community or book a demo.
Featured tutorials
Visit the tutorials section for more.
How to set up LLM analytics for Cohere
Tracking your Cohere usage, costs, and latency is crucial to understanding how your users are interacting with your AI and LLM-powered features.
How to set up LLM analytics for Anthropic's Claude
In this tutorial, we'll build a basic Next.js app, implement the Claude API, and capture these events automatically using PostHog's LLM analytics product.
How to monitor LlamaIndex with Langfuse and PostHog
LlamaIndex is a powerful framework for connecting LLMs with external data sources. Combine PostHog with Langfuse to easily monitor your LLM app.
How to set up OpenAI analytics
Let's explore how to add and track the generate API route, then view generation data in PostHog.
Explore the docs
Get a more technical overview of how everything works in our docs.
LLM analytics
Getting started
Concepts
Guides
Resources
Meet the team
PostHog works in small teams. The LLM Analytics team is responsible for building LLM analytics.
(Shockingly, this team prefers their pizza without pineapple.)
Roadmap & changelog
Here's what the team is up to.
Latest update
Aug 2025
LLM analytics is out of beta
Emerging from beta like a hedgehog from hibernation, LLM analytics is now ready for primetime. In fact, teams like Lovable are already using it at scale to debug their traces, monitor performance, track costs, and even test different models against each other.
Testing models against each other is especially cool because it uses the playground feature, which was based on a suggestion from Lovable's engineers. The playground enables you to recreate or manufacture new traces with full visibility into the model's inner machinations.
Now that it's out of beta, LLM analytics has everything you'd expect from a PostHog product launch, including a massive 100,000 free events every month and usage-based pricing that doesn't rely on per-seat penalties.
In short, LLM analytics is (to quote Lovable's engineers) "super cool".
Up next
Check out the company roadmap to see what we're working on next!
Pairs great with...
PostHog products are natively designed to be interoperable using Product OS.

"PostHog's LLM analytics saved us so much time. We used to use a whole system of tools to track the prompts and responses for debugging and this is an infinitely better UI. We use it for every single AI experiment we run now — also, if you need another quote then let me know, because the whole team loves it!"
This is the call to action.
If nothing else has sold you on PostHog, hopefully these classic marketing tactics will.
PostHog Cloud
Digital download*
Not endorsed by Kim K
*PostHog is a web product and cannot be installed from a CD.
We did once send some customers a floppy disk but it was a Rickroll.