To avoid storing potentially sensitive prompt and completion data, you can enable privacy mode. This excludes the $ai_input and $ai_output_choices properties from being captured.
SDK config
This can be done by setting the privacy_mode config option in the SDK like this:

```python
from posthog import Posthog

posthog = Posthog(
    "<ph_project_api_key>",
    host="https://us.i.posthog.com",
    privacy_mode=True,
)
```
Request parameter
It can also be set at the request level by setting the privacy_mode parameter to True in the request. The exact setup depends on the LLM platform you're using:
```python
client.responses.create(
    model="gpt-4o-mini",
    input=[...],
    posthog_privacy_mode=True,
)
```
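For context, the request-level call above assumes a client created through PostHog's OpenAI wrapper, which forwards posthog_* parameters alongside the normal OpenAI arguments. A minimal setup sketch, assuming the posthog.ai.openai wrapper module and placeholder keys:

```python
from posthog import Posthog
from posthog.ai.openai import OpenAI  # PostHog's wrapped OpenAI client (assumed import path)

# Standard PostHog client; privacy_mode could also be enabled globally here.
posthog = Posthog("<ph_project_api_key>", host="https://us.i.posthog.com")

# The wrapped client captures LLM events automatically and accepts
# per-request posthog_* parameters such as posthog_privacy_mode.
client = OpenAI(api_key="<openai_api_key>", posthog_client=posthog)
```

With this client in place, passing posthog_privacy_mode=True on an individual request suppresses $ai_input and $ai_output_choices for that request only, while other requests are captured normally.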