LLM traces and generations (beta)

Once you install PostHog's LLM observability SDK, it autocaptures LLM generations and traces. You can then view these in PostHog.

Generations

Generations are events that capture an LLM request. The generations tab lists them along with the properties PostHog autocaptures, such as the person, model, total cost, token usage, and more.

When you expand a generation, you see the properties and metadata every event has, along with the conversation history: each message's role (system, user, or assistant) and its input and output content.
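To make the structure above concrete, here is a minimal sketch of what an autocaptured generation event might look like. The property names (such as `$ai_model` and `$ai_input`) are illustrative assumptions based on the description above, not the SDK's exact schema.

```python
# Hypothetical shape of an autocaptured generation event. Property names
# are assumptions for illustration, not the SDK's exact schema.
generation_event = {
    "event": "$ai_generation",
    "properties": {
        "$ai_model": "gpt-4o",         # model used for the request
        "$ai_input_tokens": 120,       # token usage
        "$ai_output_tokens": 45,
        "$ai_total_cost_usd": 0.0021,  # total cost of the request
        "$ai_input": [                 # conversation history with roles
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize this article."},
        ],
        "$ai_output_choices": [
            {"role": "assistant", "content": "Here is a summary..."},
        ],
    },
}

# Every message in the conversation history carries a role and content.
roles = [m["role"] for m in generation_event["properties"]["$ai_input"]]
print(roles)  # → ['system', 'user']
```

Expanding a generation in the UI renders this same information: the event properties at the top and the role-tagged conversation below.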

LLM generations

Traces

Traces are collections of generations that capture a full interaction between a user and an LLM. The traces tab lists them along with the properties PostHog autocaptures, such as the person, total cost, total latency, and more.

Clicking on a trace opens a timeline of the interaction containing all of its generation events, so you can see the entire conversation, details about the trace, and each individual generation.
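A trace's totals are just aggregates over its generations. The sketch below, using hypothetical field names (`trace_id`, `cost_usd`, `latency_s`), shows the kind of grouping PostHog does for you when it computes a trace's total cost and latency.

```python
# A minimal sketch of how a trace aggregates its generation events,
# assuming each generation carries a trace ID, a cost, and a latency.
# Field names are hypothetical; PostHog computes these totals for you.
from collections import defaultdict

def summarize_traces(generations):
    """Group generation events by trace ID and total their cost and latency."""
    traces = defaultdict(
        lambda: {"generations": 0, "total_cost_usd": 0.0, "total_latency_s": 0.0}
    )
    for gen in generations:
        trace = traces[gen["trace_id"]]
        trace["generations"] += 1
        trace["total_cost_usd"] += gen["cost_usd"]
        trace["total_latency_s"] += gen["latency_s"]
    return dict(traces)

generations = [
    {"trace_id": "t1", "cost_usd": 0.002, "latency_s": 1.2},
    {"trace_id": "t1", "cost_usd": 0.001, "latency_s": 0.8},
    {"trace_id": "t2", "cost_usd": 0.004, "latency_s": 2.5},
]
summary = summarize_traces(generations)
print(summary["t1"]["generations"])  # → 2
```

The traces tab shows exactly this kind of rollup per trace, with the timeline view letting you drill into the individual generations behind each total.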

LLM traces
