LLM analytics

Monitor and debug your AI products

Analyze traces, spans, per-user costs, latency, and more

Screenshot of LLM analytics in PostHog
  • uses LLM analytics with session replays (and everything else)

    "PostHog is amazing. It reins in the chaos to have everything in one place. Otherwise it’s quite overwhelming to try and understand what’s working and what’s not"

    Read the story
  • compared us to every other observability tool, just to be sure

    "If you're building a new product, just use PostHog. It's a no-brainer. It's the only all-in-one platform like it for developers."

    Read the story
  • Generation tracking

    Monitor generation events and prompts, with autocapture

  • Trace monitoring

    Follow the full user interaction, including all generations

  • Cost reporting

    Keep an eye on overall costs, or break it down by model, user, and more

  • User tracking

    Break down every interaction on an individual user basis

  • Latency monitoring

    Understand latency over time and how models impact performance

Privacy mode: If you want high-level metrics (like cost and performance) without exposing user conversations to your team, you can enable privacy mode in your code.
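To make the idea concrete, here's a minimal sketch of what privacy mode conceptually does (this is an illustration, not the PostHog SDK itself, and the `$ai_*` field names are illustrative): metric fields survive, conversation content is dropped before an event is sent.

```python
def scrub_generation_event(event: dict, privacy_mode: bool) -> dict:
    """Return a copy of an LLM generation event. With privacy mode on,
    metric fields (model, cost, latency, tokens) survive, but the prompt
    and response bodies are removed."""
    if not privacy_mode:
        return dict(event)
    sensitive = {"$ai_input", "$ai_output_choices"}  # illustrative field names
    return {k: v for k, v in event.items() if k not in sensitive}

event = {
    "$ai_model": "gpt-4o",
    "$ai_latency": 1.2,          # seconds
    "$ai_total_cost_usd": 0.004,
    "$ai_input": [{"role": "user", "content": "a private question"}],
    "$ai_output_choices": [{"role": "assistant", "content": "a private answer"}],
}
print(scrub_generation_event(event, privacy_mode=True))
```

In the real SDKs this is a configuration flag rather than something you implement yourself; see the docs for the exact option name in your language.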

  • Ready-made dashboards

    Use ready-made dashboards to track metrics on a per-model or per-user basis

  • Latency alerts

    Get alerts when latency exceeds a threshold, or when it spikes for a specific model.

  • Cost tracking

    Monitor your per-user costs, and combine with revenue analytics for more insights

Prompt playground

Test new models against each other, simulate different chat histories, and test different reasoning levels – all inside PostHog.

Works with other PostHog products

You can use LLM analytics by itself, but the magic comes when you use it with other tools and products from PostHog.

  • Session replay

    Watch session recordings to see what users see when interacting with your AI product

  • Feature flags

    See how specific changes impact your metrics, deploy changes to certain users only

  • Error tracking

    Correlate app errors with specific sessions and prompt responses

We use LLM analytics, too.

We've used LLM analytics heavily while building Max AI, our in-app AI product manager. At the start, it helped us keep an eye on operational costs so we could work out what sustainable pricing might look like. When we launched into beta, we also monitored traces to see how Max was being used and to gather feedback.

Having a bespoke tool to track traces has been invaluable for the Max AI team because of how we can connect traces to other data. The ability to easily jump from a trace to a session replay or person profile, for example, lets us see how users interact with Max in real time and understand their wider context too.

"The best thing about LLM analytics for us is how it connects with our other tools, like session replays and feature flags. That's something no other tool can do, because they focus on a narrower scope."
Michael, LLM wizard
Michael Matloka
Product Engineer at PostHog

Usage-based pricing

Use LLM analytics free. Or enter a credit card for advanced features. Either way, your first 100,000 events are free – every month.

Free

No credit card required
Events: 100,000/mo

All other plans

All features, no limitations
Events: Unlimited

Features

The following are included in both the Free plan and all paid plans: data retention, LLM analytics dashboard, traces, generations, spans, LLM playground, Python SDK, Node.js SDK, cost tracking, token usage, latency monitoring, product analytics integration, error tracking integration, session replay integration, privacy mode, and custom properties.

Monthly pricing

First 100k events: Free
100k+: $0.00006/event
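The tiered pricing above works out to a simple calculation. Here's a quick sketch (a hypothetical helper for back-of-envelope estimates, not an official billing tool):

```python
def monthly_cost(events: int, free_allowance: int = 100_000,
                 rate_per_event: float = 0.00006) -> float:
    """Estimate the monthly LLM analytics bill: the first 100k events
    each month are free; every event beyond that is billed at $0.00006."""
    billable = max(0, events - free_allowance)
    return round(billable * rate_per_event, 2)

print(monthly_cost(80_000))     # under the free allowance, so $0
print(monthly_cost(1_000_000))  # 900k billable events
```

For example, a million events in a month leaves 900,000 billable events, or $54.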

FAQs

PostHog vs...

Compared against Langfuse, LangSmith, and Helicone across: generation tracking, latency tracking, cost tracking (incl. cost-per-user), trace visualization, token tracking, prompt playground, prompt evaluations, alerting, SOC 2 compliance, and HIPAA and GDPR compliance.

So, what's best for you?

Reasons a competitor may be best for you (for now...)

  • You don't need any product insights and only want to track operational metrics
  • You're building a mobile specific product and need deep mobile support
  • You don't want to use an open source product

Reasons to choose PostHog

  • You want to understand LLM costs on a per-user basis, in addition to other axes
  • You want to combine LLM analytics with other tools, including error tracking and session replays
  • You need easy regulatory compliance for HIPAA and GDPR

Have questions about PostHog?
Ask the community or book a demo.

Featured tutorials

Visit the tutorials section for more.

  • How to set up LLM analytics for Cohere

    Tracking your Cohere usage, costs, and latency is crucial to understanding how your users are interacting with your AI and LLM-powered features.

    Read more
  • How to set up LLM analytics for Anthropic's Claude

    In this tutorial, we'll build a basic Next.js app, implement the Claude API, and capture these events automatically using PostHog's LLM analytics product.

    Read more
  • How to monitor LlamaIndex with Langfuse and PostHog

    LlamaIndex is a powerful framework for connecting LLMs with external data sources. Combine PostHog with Langfuse to easily monitor your LLM app.

    Read more
  • How to set up OpenAI analytics

    Let's explore how to add and track the generate API route, then view generation data in PostHog.

    Read more

Explore the docs

Get a more technical overview of how everything works in our docs.

Meet the team

PostHog works in small teams. The LLM Analytics team is responsible for building LLM analytics.

(Shockingly, this team prefers their pizza without pineapple.)

Roadmap & changelog

Here's what the team is up to.

Latest update

Aug 2025

LLM analytics is out of beta

Emerging from beta like a hedgehog from hibernation, LLM analytics is now ready for primetime. In fact, teams like Lovable are already using it at scale to debug their traces, monitor performance, track costs, and even test different models against each other.

Testing models against each other is especially cool because it uses the playground feature, which was based on a suggestion from Lovable's engineers. The playground enables you to recreate or manufacture new traces with full visibility into the model's inner machinations.

Now that it's out of beta, LLM analytics has everything you'd expect from a PostHog product launch, including a massive 100,000 free events every month and usage-based pricing that doesn't rely on per-seat penalties.

In short, LLM analytics is (to quote Lovable's engineers) "super cool".

Up next

Check out the company roadmap to see what we're working on next!

Pairs great with...

PostHog products are natively designed to be interoperable using Product OS.

Chris Raroque
"PostHog's LLM analytics saved us so much time. We used to use a whole system of tools to track the prompts and responses for debugging and this is an infinitely better UI. We use it for every single AI experiment we run now — also, if you need another quote then let me know, because the whole team loves it!"
Founder and YouTuber

This is the call to action.

If nothing else has sold you on PostHog, hopefully these classic marketing tactics will.

Eco-friendly

PostHog Cloud

Digital download*

PostHog Cloud
People on G2 think we're great

Not endorsed by Kim K

*PostHog is a web product and cannot be installed by CD.
We did once send some customers a floppy disk but it was a Rickroll.

  • Select your cloud
  • Starts at: $0 (Free). >1 left at this price!

Hurry: Tons of companies signed up. Act now and get $0 off your first order.