OpenClaw LLM analytics installation

OpenClaw is a self-hosted AI assistant gateway that connects messaging platforms (Telegram, Slack, Discord, WebChat) to AI models. The PostHog plugin is bundled with OpenClaw and captures LLM generations, tool executions, and conversation traces as $ai_generation, $ai_span, and $ai_trace events.

Prerequisites

You need:

  • A running OpenClaw installation (the gateway you will start below)
  • A PostHog project API key and host (available in your PostHog project settings)

Enable the PostHog plugin

Add the PostHog plugin to your OpenClaw config file (~/.openclaw/openclaw.json or openclaw.yaml):

```json
{
  "plugins": {
    "entries": {
      "posthog": {
        "enabled": true,
        "config": {
          "apiKey": "<ph_project_api_key>",
          "host": "https://us.i.posthog.com"
        }
      }
    }
  },
  "diagnostics": {
    "enabled": true
  }
}
```

You can find your project API key and host in your PostHog project settings.

Note: diagnostics.enabled must be true for trace-level events ($ai_trace) to be captured. Generation and span events work without it.
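If you configure OpenClaw with openclaw.yaml instead of JSON, the same settings carry over directly. This is a sketch of the equivalent YAML form, assuming the key names mirror the JSON config exactly:

```yaml
# Equivalent openclaw.yaml form (key names assumed to mirror the JSON config)
plugins:
  entries:
    posthog:
      enabled: true
      config:
        apiKey: <ph_project_api_key>
        host: https://us.i.posthog.com
diagnostics:
  enabled: true
```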

Start the gateway

Start (or restart) the OpenClaw gateway for the plugin to take effect:

```shell
node openclaw.mjs gateway
```

The PostHog plugin initializes automatically on startup. Once users send messages through any connected channel (Telegram, Slack, Discord, or WebChat), LLM analytics events are captured and sent to PostHog.
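For a sense of what arrives in PostHog, a captured $ai_generation event carries properties along these lines. This is an illustrative payload only; the model, provider, and values shown here are placeholders, and the exact set of properties depends on your setup:

```json
{
  "event": "$ai_generation",
  "properties": {
    "$ai_model": "gpt-4o-mini",
    "$ai_provider": "openai",
    "$ai_input_tokens": 412,
    "$ai_output_tokens": 128,
    "$ai_latency": 1.74,
    "$ai_trace_id": "a1b2c3d4-..."
  }
}
```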

Verify traces and generations

After sending a few messages through your gateway:

  1. Go to the LLM analytics tab in PostHog.
  2. You should see traces and generations appearing within a few minutes.

Configuration options

All options go under the config key inside the posthog plugin entry:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | (required) | Your PostHog project API key |
| host | string | https://us.i.posthog.com | PostHog ingestion host |
| privacyMode | boolean | false | When enabled, message content (inputs/outputs) is not sent to PostHog. Token counts, latency, model info, and errors are still captured. |
| traceGrouping | "message" \| "session" | "message" | "message": one trace per LLM call cycle. "session": groups all generations in a conversation into one trace. |
| sessionWindowMinutes | number | 60 | Minutes of inactivity before starting a new session window. Applies in both trace grouping modes. |
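For example, a posthog plugin entry that redacts message content and groups generations by conversation session could look like this (illustrative values):

```json
"posthog": {
  "enabled": true,
  "config": {
    "apiKey": "<ph_project_api_key>",
    "host": "https://us.i.posthog.com",
    "privacyMode": true,
    "traceGrouping": "session",
    "sessionWindowMinutes": 30
  }
}
```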

Trace grouping modes

  • "message" (default): Each agent invocation gets its own trace. Tool-use iterations within one invocation share the same trace. Best for most use cases.
  • "session": All generations within a conversation window are grouped into a single trace. A new trace starts after sessionWindowMinutes of inactivity. Useful for chat channels (Telegram, Slack) where per-message traces fragment conversation flow.

What gets captured

The plugin captures three types of events:

  • $ai_generation — Every LLM call, including model, provider, token usage, cost, latency, and input/output messages (in OpenAI chat format).
  • $ai_span — Each tool execution, including tool name, input parameters, output result, duration, and parent generation.
  • $ai_trace — Completed message cycles with aggregated token totals and latency.
