# OpenRouter LLM analytics installation

## 1. Install the PostHog SDK (required)

Setting up analytics starts with installing the PostHog SDK for your language. LLM analytics works best with our Python and Node SDKs.
```bash
pip install posthog
```
## 2. Install the OpenAI SDK (required)

Install the OpenAI SDK:

```bash
pip install openai
```
## 3. Initialize PostHog and the OpenAI client (required)

We call OpenRouter through the OpenAI client and generate a response, using PostHog's OpenAI provider to capture all the details of the call.

Initialize PostHog with your PostHog project API key and host from your project settings, then pass the PostHog client along with the OpenRouter config (the base URL and API key) to our OpenAI wrapper.
```python
from posthog.ai.openai import OpenAI
from posthog import Posthog

posthog = Posthog(
    "<ph_project_api_key>",
    host="https://us.i.posthog.com",
)

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<openrouter_api_key>",
    posthog_client=posthog,  # Optional. If not provided, a default client is used.
)
```

> **Note:** This also works with the `AsyncOpenAI` client.

> **Proxy note:** These SDKs do not proxy your calls. They only fire off an async call to PostHog in the background to send the data.
You can also use LLM analytics with other SDKs or our API, but you will need to capture the data in the right format. See the schema in the manual capture section for more details.
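For other SDKs, a minimal sketch of what manual capture could look like with the Python client is below. The `$ai_generation` event name and property names come from this guide; the values are illustrative, and the full, authoritative schema lives in the manual capture section.

```python
from posthog import Posthog

posthog = Posthog("<ph_project_api_key>", host="https://us.i.posthog.com")

# Capture one LLM call as an $ai_generation event, filling in the
# properties described in the table at the end of this guide.
posthog.capture(
    distinct_id="user_123",
    event="$ai_generation",
    properties={
        "$ai_model": "gpt-5-mini",
        "$ai_latency": 1.23,  # seconds
        "$ai_input": [{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
        "$ai_input_tokens": 12,  # example value; usually read from response.usage
        "$ai_output_choices": [{"role": "assistant", "content": "Hedgehogs have thousands of spines."}],
        "$ai_output_tokens": 9,  # example value; usually read from response.usage
    },
)
```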
## 4. Call OpenRouter (required)

Now, when you call OpenRouter with the OpenAI SDK, PostHog automatically captures an `$ai_generation` event.

You can also capture or modify additional properties with the distinct ID, trace ID, properties, groups, and privacy mode parameters.
```python
response = client.responses.create(
    model="gpt-5-mini",
    input=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
    posthog_distinct_id="user_123",  # optional
    posthog_trace_id="trace_123",  # optional
    posthog_properties={"conversation_id": "abc123", "paid": True},  # optional
    posthog_groups={"company": "company_id_in_your_db"},  # optional
    posthog_privacy_mode=False,  # optional
)

print(response.output_text)
```

Notes:

- We also support the old `chat.completions` API.
- This works with responses where `stream=True` (see the streaming sketch after this list).
- If you want to capture LLM events anonymously, don't pass a distinct ID to the request. See our docs on anonymous vs identified events to learn more.
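As a streaming sketch (assuming the wrapper passes `stream=True` through to the underlying client unchanged, shown here with the older `chat.completions` API mentioned above):

```python
# One $ai_generation event is still captured once the stream finishes.
stream = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
    stream=True,
    posthog_distinct_id="user_123",  # optional
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```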
You can expect captured `$ai_generation` events to have the following properties:

| Property | Description |
| --- | --- |
| `$ai_model` | The specific model, like `gpt-5-mini` or `claude-4-sonnet` |
| `$ai_latency` | The latency of the LLM call in seconds |
| `$ai_tools` | Tools and functions available to the LLM |
| `$ai_input` | List of messages sent to the LLM |
| `$ai_input_tokens` | The number of tokens in the input (often found in `response.usage`) |
| `$ai_output_choices` | List of response choices from the LLM |
| `$ai_output_tokens` | The number of tokens in the output (often found in `response.usage`) |
| `$ai_total_cost_usd` | The total cost in USD (input + output) |

See the full list of properties for more.
## Privacy mode

To avoid storing potentially sensitive prompt and completion data, you can enable privacy mode. This excludes the `$ai_input` and `$ai_output_choices` properties from being captured.

**SDK config:** This can be done by setting the `privacy_mode` config option in the SDK (see the sketch below).

**Request parameter:** It can also be set at the request level by setting the `posthog_privacy_mode` parameter to `True` in the request, as in the call example above. The exact setup depends on the LLM platform you're using.
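A minimal sketch of the SDK-level option, assuming the `privacy_mode` flag is accepted by the Python client constructor as described above:

```python
from posthog import Posthog

# With privacy mode on, $ai_input and $ai_output_choices are
# excluded from every captured LLM event.
posthog = Posthog(
    "<ph_project_api_key>",
    host="https://us.i.posthog.com",
    privacy_mode=True,
)
```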