Privacy mode

To avoid storing potentially sensitive prompt and completion data, you can enable privacy mode. This excludes the $ai_input and $ai_output_choices properties from being captured.
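Conceptually, privacy mode drops these two properties from the event before it is sent. A minimal sketch of that filtering (`scrub_ai_properties` is a hypothetical helper for illustration, not PostHog's internal code):

```python
# Hypothetical illustration of what privacy mode excludes; not PostHog internals.
PRIVATE_KEYS = {"$ai_input", "$ai_output_choices"}

def scrub_ai_properties(properties: dict, privacy_mode: bool) -> dict:
    """Return a copy of the event properties, with prompt and completion
    data removed when privacy mode is on."""
    if not privacy_mode:
        return dict(properties)
    return {k: v for k, v in properties.items() if k not in PRIVATE_KEYS}

event = {
    "$ai_input": "sensitive prompt",
    "$ai_output_choices": ["sensitive completion"],
    "$ai_model": "gpt-4o-mini",
}
print(scrub_ai_properties(event, privacy_mode=True))  # only $ai_model survives
```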

SDK config

Enable it by setting the privacy_mode config option when initializing the SDK:

from posthog import Posthog

posthog = Posthog(
    "<ph_project_api_key>",
    host="https://us.i.posthog.com",
    privacy_mode=True
)

Request parameter

It can also be set at the request level by passing posthog_privacy_mode=True in the request. The exact setup depends on the LLM platform you're using:

client.responses.create(
    model="gpt-4o-mini",
    input=[...],
    posthog_privacy_mode=True
)
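When both levels are set, a reasonable mental model is that either flag is enough to suppress the data. The sketch below assumes that or-semantics; `effective_privacy_mode` is a hypothetical helper, not the SDK's actual resolution logic:

```python
def effective_privacy_mode(sdk_privacy_mode: bool, request_privacy_mode: bool = False) -> bool:
    # Assumption: either the SDK-level flag or the per-request flag enables privacy mode.
    return bool(sdk_privacy_mode or request_privacy_mode)

print(effective_privacy_mode(False, request_privacy_mode=True))  # True
```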
