# OpenClaw LLM analytics installation
OpenClaw is a self-hosted AI assistant gateway that connects messaging platforms (Telegram, Slack, Discord, WebChat) to AI models. The PostHog plugin is bundled with OpenClaw and captures LLM generations, tool executions, and conversation traces as `$ai_generation`, `$ai_span`, and `$ai_trace` events.
## Prerequisites
You need:
- A running OpenClaw gateway (Node.js >= 22)
- A PostHog account with a project API key
## Enable the PostHog plugin
Add the PostHog plugin to your OpenClaw config file (`~/.openclaw/openclaw.json` or `openclaw.yaml`):
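A minimal entry might look like the following. The surrounding plugin-entry shape is an assumption for illustration; the `config` keys (`apiKey`, `host`) and the `diagnostics.enabled` flag are the options described in this guide:

```json
{
  "plugins": {
    "posthog": {
      "enabled": true,
      "config": {
        "apiKey": "phc_your_project_api_key",
        "host": "https://us.i.posthog.com"
      }
    }
  },
  "diagnostics": {
    "enabled": true
  }
}
```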
You can find your project API key and host in your PostHog project settings.
**Note:** `diagnostics.enabled` must be `true` for trace-level events (`$ai_trace`) to be captured. Generation and span events work without it.
## Start the gateway
Start (or restart) the OpenClaw gateway for the plugin to take effect:
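How you start the gateway depends on your installation; the command below assumes the `openclaw` CLI is on your path (check `openclaw --help` for the entry point your version provides):

```shell
# Restart so the newly added plugin entry is picked up on startup
openclaw gateway
```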
The PostHog plugin initializes automatically on startup. Once users send messages through any connected channel (Telegram, Slack, Discord, or WebChat), LLM analytics events are captured and sent to PostHog.
## Verify traces and generations
After sending a few messages through your gateway:
- Go to the LLM analytics tab in PostHog.
- You should see traces and generations appearing within a few minutes.
## Configuration options
All options go under the `config` key inside the `posthog` plugin entry:
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | string | (required) | Your PostHog project API key |
| `host` | string | `https://us.i.posthog.com` | PostHog ingestion host |
| `privacyMode` | boolean | `false` | When enabled, message content (inputs/outputs) is not sent to PostHog. Token counts, latency, model info, and errors are still captured. |
| `traceGrouping` | `"message"` \| `"session"` | `"message"` | `"message"`: one trace per LLM call cycle. `"session"`: groups all generations in a conversation into one trace. |
| `sessionWindowMinutes` | number | `60` | Minutes of inactivity before a new session window starts. Applies in both trace grouping modes. |
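Putting the optional settings together, a privacy-conscious deployment that groups traces by session might configure the plugin like this (plugin-entry shape assumed for illustration; option names and the EU ingestion host are as documented):

```json
{
  "plugins": {
    "posthog": {
      "enabled": true,
      "config": {
        "apiKey": "phc_your_project_api_key",
        "host": "https://eu.i.posthog.com",
        "privacyMode": true,
        "traceGrouping": "session",
        "sessionWindowMinutes": 30
      }
    }
  }
}
```

With `privacyMode` on, dashboards still show token usage, latency, and error rates per model, but message bodies never leave your gateway.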
### Trace grouping modes
"message"(default): Each agent invocation gets its own trace. Tool-use iterations within one invocation share the same trace. Best for most use cases."session": All generations within a conversation window are grouped into a single trace. A new trace starts aftersessionWindowMinutesof inactivity. Useful for chat channels (Telegram, Slack) where per-message traces fragment conversation flow.
## What gets captured
The plugin captures three types of events:
- `$ai_generation` — Every LLM call, including model, provider, token usage, cost, latency, and input/output messages (in OpenAI chat format).
- `$ai_span` — Each tool execution, including tool name, input parameters, output result, duration, and parent generation (learn more).
- `$ai_trace` — Completed message cycles with aggregated token totals and latency (learn more).
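For reference, a captured `$ai_generation` event carries properties along these lines. This is an illustrative subset with example values, not an exhaustive schema; see PostHog's LLM analytics documentation for the full property list:

```json
{
  "event": "$ai_generation",
  "properties": {
    "$ai_model": "gpt-4o",
    "$ai_provider": "openai",
    "$ai_input_tokens": 412,
    "$ai_output_tokens": 96,
    "$ai_latency": 1.8,
    "$ai_trace_id": "a1b2c3d4-example-trace-id",
    "$ai_input": [{ "role": "user", "content": "What's the weather?" }]
  }
}
```

When `privacyMode` is enabled, content-bearing properties such as `$ai_input` are omitted while the numeric metrics remain.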