LangChain LLM analytics installation

LLM analytics is currently considered in beta. To access it, enable the feature preview in your PostHog account.

  1. Install the PostHog SDK

    Required

    Setting up analytics starts with installing the PostHog SDK for your language. LLM analytics works best with our Python and Node SDKs.

    pip install posthog
  2. Install LangChain and OpenAI SDKs

    Required

    Install the LangChain and OpenAI Python SDKs:

    pip install langchain openai langchain-openai
    Proxy note

    These SDKs do not proxy your calls; they only fire off an async call to PostHog in the background to send the data.

    You can also use LLM analytics with other SDKs or our API, but you will need to capture the data manually via the capture method. See the schema in the manual capture section for more details.
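
    For example, manually capturing a single generation with the Python SDK might look roughly like the sketch below. The $ai_generation event name and the exact property values shown are assumptions for illustration; check them against the schema in the manual capture section before relying on them.

    from posthog import Posthog

    posthog = Posthog("<ph_project_api_key>", host="https://us.i.posthog.com")

    # Hypothetical manual capture of one generation; verify the event and
    # property names against the manual capture schema
    posthog.capture(
        distinct_id="user_123",
        event="$ai_generation",
        properties={
            "$ai_model": "gpt-4o-mini",
            "$ai_input": [{"role": "user", "content": "Tell me a joke"}],
            "$ai_output_choices": [{"role": "assistant", "content": "Why do programmers prefer dark mode?"}],
            "$ai_input_tokens": 12,
            "$ai_output_tokens": 9,
            "$ai_latency": 0.8,
        },
    )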

  3. Initialize PostHog and LangChain

    Required

    In the spot where you make your OpenAI calls, import PostHog, LangChain, and our LangChain CallbackHandler. Initialize PostHog with your project API key and host from your project settings, and pass it to the CallbackHandler.

    Optionally, you can provide a user distinct ID, trace ID, PostHog properties, groups, and privacy mode.

    from posthog.ai.langchain import CallbackHandler
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate
    from posthog import Posthog

    posthog = Posthog(
        "<ph_project_api_key>",
        host="https://us.i.posthog.com"
    )

    callback_handler = CallbackHandler(
        client=posthog,  # This is an optional parameter. If it is not provided, a default client will be used.
        distinct_id="user_123",  # optional
        trace_id="trace_456",  # optional
        properties={"conversation_id": "abc123"},  # optional
        groups={"company": "company_id_in_your_db"},  # optional
        privacy_mode=False  # optional
    )

    Note: If you want to capture LLM events anonymously, don't pass a distinct ID to the CallbackHandler. See our docs on anonymous vs identified events to learn more.
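
    For example, an anonymous handler simply omits the distinct ID (a minimal sketch reusing the client initialized above):

    # Events captured through this handler are not tied to an identified user
    callback_handler = CallbackHandler(client=posthog)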

  4. Call LangChain

    Required

    When you invoke your chain, pass the callback_handler in the config as part of your callbacks:

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("user", "{input}")
    ])

    model = ChatOpenAI(openai_api_key="your_openai_api_key")
    chain = prompt | model

    # Execute the chain with the callback handler
    response = chain.invoke(
        {"input": "Tell me a joke about programming"},
        config={"callbacks": [callback_handler]}
    )

    print(response.content)

    This automatically captures many properties into PostHog including $ai_input, $ai_input_tokens, $ai_latency, $ai_model, $ai_model_parameters, $ai_output_choices, and $ai_output_tokens. It also automatically creates a trace hierarchy based on how LangChain components are nested.
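
    To see the trace hierarchy, you can compose more runnables into the chain. The sketch below is an illustration rather than part of the original example; it assumes langchain-core's StrOutputParser, so the parser, the model call, and the chain itself each appear as nested runs in the trace:

    from langchain_core.output_parsers import StrOutputParser

    # Each runnable in the composed chain shows up as its own span in the trace
    nested_chain = prompt | model | StrOutputParser()

    answer = nested_chain.invoke(
        {"input": "Explain recursion in one sentence"},
        config={"callbacks": [callback_handler]}
    )
    print(answer)  # StrOutputParser returns a plain string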

  5. Verify traces and generations

    Checkpoint
    Confirm LLM events are being sent to PostHog

    Before proceeding, let's make sure LLM events are being captured and sent to PostHog. Under LLM analytics, you should see rows of data appear in the Traces and Generations tabs.
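
    If you're testing from a short-lived script and nothing shows up, the process may be exiting before the background queue is sent. A minimal sketch, assuming the Python client's flush() and shutdown() methods:

    # Make sure queued events are delivered before the script exits
    posthog.flush()

    # or, when the application is shutting down for good:
    posthog.shutdown()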


    [Screenshot: LLM generations in the PostHog Traces and Generations tabs]
