# Trace summarization

Trace summarization uses AI to generate human-readable summaries of your LLM [traces](/docs/llm-analytics/traces.md) and events. This helps you quickly understand complex multi-step AI interactions without reading through raw inputs and outputs.

## How it works

When viewing a trace or generation event, click the **Summary** tab to generate an AI-powered summary. The summary includes:

-   **Title**: A brief description of what the trace accomplished
-   **Flow diagram**: An ASCII visualization of the execution flow
-   **Summary points**: Key highlights and actions from the trace
-   **Interesting notes**: Notable observations like errors or unusual patterns

## Summarization modes

Choose between two summarization modes based on your needs:

| Mode | Description | Best for |
| --- | --- | --- |
| Minimal | Quick 3-5 bullet points with key highlights | Fast overview of what happened |
| Detailed | Comprehensive 5-10 points with full context | Deep understanding of complex traces |

## Requirements

### AI data analysis consent

Summarization requires AI data processing to be enabled for your organization. When you first use the feature, you'll be prompted to approve AI data processing. This consent applies organization-wide.

To manage this setting, go to **Settings** → **Organization** → **General** → **PostHog AI data analysis**.

### Rate limits

To ensure fair usage, summarization has the following rate limits:

| Limit | Value |
| --- | --- |
| Burst | 50 requests/minute |
| Sustained | 200 requests/hour |
| Daily cap | 500 requests/day |

Summaries are cached, so regenerating the same trace won't count against your limits unless you explicitly request a refresh.
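The tiered limits above (burst, sustained, daily) are the kind of policy commonly enforced with a sliding-window counter. The sketch below is purely illustrative and not PostHog's implementation; the class and limit names are assumptions chosen to mirror the table.

```python
import time
from collections import deque

# Illustrative only: PostHog enforces these limits server-side.
# (window in seconds, max requests) pairs mirroring the table above.
LIMITS = [(60, 50), (3600, 200), (86400, 500)]

class SlidingWindowLimiter:
    """Hypothetical sliding-window limiter checking all three tiers."""

    def __init__(self, limits=LIMITS):
        self.limits = limits
        self.timestamps = deque()  # timestamps of previously allowed requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Discard entries older than the largest window (the daily cap).
        horizon = now - max(window for window, _ in self.limits)
        while self.timestamps and self.timestamps[0] < horizon:
            self.timestamps.popleft()
        # A request must fit under every tier simultaneously.
        for window, cap in self.limits:
            recent = sum(1 for t in self.timestamps if t > now - window)
            if recent >= cap:
                return False
        self.timestamps.append(now)
        return True
```

With these numbers, a burst of rapid requests is cut off at the 50th even though the hourly and daily budgets still have headroom, which is exactly the behavior the burst tier is meant to produce.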

## Providing feedback

After generating a summary, you can rate it using the thumbs up/down buttons. This feedback helps us improve the summarization quality.
