Anthropic LLM analytics installation


LLM analytics is currently in beta. To access it, enable the feature preview in your PostHog account.

  1. Install the PostHog SDK

    Required

    Setting up analytics starts with installing the PostHog SDK for your language. LLM analytics works best with our Python and Node SDKs.

    pip install posthog
  2. Install the Anthropic SDK

    Required

    Install the Anthropic SDK:

    pip install anthropic
    Proxy note

    These SDKs do not proxy your calls; they only fire off an async call to PostHog in the background to send the data.

    You can also use LLM analytics with other SDKs or our API, but you will need to capture the data manually via the capture method. See schema in the manual capture section for more details.
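    For illustration, here is a minimal manual-capture sketch, assuming the classic capture(distinct_id, event, properties) signature of the PostHog Python SDK and the $ai_generation event name; the property names follow the $ai_* schema listed in step 4, but check the manual capture section for the authoritative schema:

    from posthog import Posthog

    posthog = Posthog("<ph_project_api_key>", host="https://us.i.posthog.com")

    # Hypothetical values; in practice these come from your own LLM call.
    posthog.capture(
        "user_123",  # distinct ID of the user making the request
        "$ai_generation",
        {
            "$ai_model": "claude-3-opus-20240229",
            "$ai_input": [{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
            "$ai_input_tokens": 12,
            "$ai_output_choices": [{"role": "assistant", "content": "Hedgehogs have around 5,000 spines."}],
            "$ai_output_tokens": 10,
            "$ai_latency": 1.2,  # seconds
        },
    )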

  3. Initialize PostHog and the Anthropic wrapper

    Required

    In the spot where you initialize the Anthropic SDK, import PostHog and our Anthropic wrapper, initialize PostHog with your project API key and host from your project settings, and pass it to the wrapper.

    from posthog.ai.anthropic import Anthropic
    from posthog import Posthog

    posthog = Posthog(
        "<ph_project_api_key>",
        host="https://us.i.posthog.com"
    )

    client = Anthropic(
        api_key="sk-ant-api...",  # Replace with your Anthropic API key
        posthog_client=posthog  # Optional; if not provided, a default client is used.
    )

    Note: This also works with the AsyncAnthropic client as well as AnthropicBedrock, AnthropicVertex, and the async versions of those.
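    As a minimal async sketch, assuming AsyncAnthropic is exported from the same posthog.ai.anthropic module and accepts the same posthog_client parameter as the sync wrapper:

    import asyncio
    from posthog.ai.anthropic import AsyncAnthropic
    from posthog import Posthog

    posthog = Posthog("<ph_project_api_key>", host="https://us.i.posthog.com")
    client = AsyncAnthropic(api_key="sk-ant-api...", posthog_client=posthog)

    async def main():
        response = await client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=1024,
            messages=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
        )
        print(response.content[0].text)

    asyncio.run(main())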

  4. Call Anthropic LLMs

    Required

    Now, when you use the Anthropic SDK, it automatically captures many properties into PostHog including $ai_input, $ai_input_tokens, $ai_cache_read_input_tokens, $ai_cache_creation_input_tokens, $ai_latency, $ai_tools, $ai_model, $ai_model_parameters, $ai_output_choices, and $ai_output_tokens.

    You can also capture or modify additional properties with the distinct ID, trace ID, properties, groups, and privacy mode parameters.

    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,  # required by the Anthropic messages API
        messages=[
            {
                "role": "user",
                "content": "Tell me a fun fact about hedgehogs"
            }
        ],
        posthog_distinct_id="user_123",  # optional
        posthog_trace_id="trace_123",  # optional
        posthog_properties={"conversation_id": "abc123", "paid": True},  # optional
        posthog_groups={"company": "company_id_in_your_db"},  # optional
        posthog_privacy_mode=False  # optional
    )
    print(response.content[0].text)

    Notes:

    • This also works when message streams are used (e.g. stream=True or client.messages.stream(...)); see the sketch after this list.
    • To capture LLM events anonymously, don't pass a distinct ID to the request. See our docs on anonymous vs identified events to learn more.
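    As a rough sketch of the streaming path, assuming the wrapper forwards the same posthog_* parameters to client.messages.stream() as it does to create():

    with client.messages.stream(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
        posthog_distinct_id="user_123"  # optional; assumes stream() accepts the same posthog_* parameters
    ) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)

    Token counts and latency are only known once the stream completes, so expect the event's $ai_* properties to reflect the full response.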
  5. Verify traces and generations

    Checkpoint
    Confirm LLM events are being sent to PostHog

    Before proceeding, let's make sure LLM events are being captured and sent to PostHog. Under LLM analytics, you should see rows of data appear in the Traces and Generations tabs.


    Screenshot: LLM generations in the PostHog LLM analytics view
