LangGraph LLM analytics installation

  1. Install the PostHog SDK

    Required

    Setting up analytics starts with installing the PostHog SDK for your language. LLM analytics works best with our Python and Node SDKs.

    pip install posthog
  2. Install LangGraph

    Required

    Install LangGraph and LangChain. PostHog instruments your LLM calls through LangChain-compatible callback handlers that LangGraph supports.

    pip install langgraph langchain-openai
  3. Initialize PostHog

    Required

    Initialize PostHog with your project API key and host from your project settings, then create a LangChain CallbackHandler.

    from posthog.ai.langchain import CallbackHandler
    from posthog import Posthog

    posthog = Posthog(
        "<ph_project_api_key>",
        host="https://us.i.posthog.com"
    )

    callback_handler = CallbackHandler(
        client=posthog,
        distinct_id="user_123",  # optional
        trace_id="trace_456",  # optional
        properties={"conversation_id": "abc123"},  # optional
        groups={"company": "company_id_in_your_db"},  # optional
        privacy_mode=False  # optional
    )
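    The options above are fixed when the handler is constructed. If you want each conversation to appear as its own trace, one pattern is to build a handler per conversation with a fresh trace_id. Here is a minimal sketch under that assumption; the start_conversation_handler helper is hypothetical, and only the CallbackHandler arguments come from the example above.

    import uuid

    def start_conversation_handler(user_id: str, conversation_id: str) -> CallbackHandler:
        # Hypothetical helper: one handler per conversation, so every generation
        # made with it shares a trace_id and groups into a single trace in PostHog.
        return CallbackHandler(
            client=posthog,
            distinct_id=user_id,  # ties events to the user
            trace_id=str(uuid.uuid4()),  # fresh trace per conversation
            properties={"conversation_id": conversation_id},
        )

    Pass the returned handler in config={"callbacks": [...]} exactly as shown in the next step.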
    How this works

    LangGraph is built on LangChain, so it supports LangChain-compatible callback handlers. PostHog's CallbackHandler captures $ai_generation events and the trace hierarchy automatically, without proxying your calls.

  4. Run your graph

    Required

    Pass the callback_handler in the config when invoking your LangGraph graph. PostHog automatically captures generation events for each LLM call.

    from langgraph.prebuilt import create_react_agent
    from langchain_openai import ChatOpenAI
    from langchain_core.tools import tool

    @tool
    def get_weather(city: str) -> str:
        """Get the weather for a given city."""
        return f"It's always sunny in {city}!"

    model = ChatOpenAI(api_key="your_openai_api_key")
    agent = create_react_agent(model, tools=[get_weather])

    result = agent.invoke(
        {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]},
        config={"callbacks": [callback_handler]}
    )
    print(result["messages"][-1].content)

    PostHog automatically captures $ai_generation events and creates a trace hierarchy based on how LangGraph components are nested. You can expect captured events to have the following properties; a sketch of the same handler on a hand-built graph follows the table.

    Property | Description
    $ai_model | The specific model, like gpt-5-mini or claude-4-sonnet
    $ai_latency | The latency of the LLM call in seconds
    $ai_time_to_first_token | Time to first token in seconds (streaming only)
    $ai_tools | Tools and functions available to the LLM
    $ai_input | List of messages sent to the LLM
    $ai_input_tokens | The number of tokens in the input (often found in response.usage)
    $ai_output_choices | List of response choices from the LLM
    $ai_output_tokens | The number of tokens in the output (often found in response.usage)
    $ai_total_cost_usd | The total cost in USD (input + output)
    [...] | See the full list of properties
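
    The prebuilt agent above is just one way to run a graph. Because the handler is a standard LangChain callback, graphs you assemble yourself are captured the same way. Here is a minimal sketch, assuming the model and callback_handler defined in the earlier steps; the node and state names are illustrative.

    from typing import Annotated, TypedDict

    from langgraph.graph import StateGraph, START, END
    from langgraph.graph.message import add_messages

    class State(TypedDict):
        messages: Annotated[list, add_messages]

    def chat(state: State):
        # Each model call inside a node becomes an $ai_generation event,
        # nested under the graph run in the trace hierarchy.
        return {"messages": [model.invoke(state["messages"])]}

    graph = StateGraph(State)
    graph.add_node("chat", chat)
    graph.add_edge(START, "chat")
    graph.add_edge("chat", END)
    app = graph.compile()

    result = app.invoke(
        {"messages": [{"role": "user", "content": "Hello!"}]},
        config={"callbacks": [callback_handler]},
    )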
  5. Verify traces and generations

    Recommended
    Confirm LLM events are being sent to PostHog

    Let's make sure LLM events are being captured and sent to PostHog. Under LLM analytics, you should see rows of data appear in the Traces and Generations tabs.


    [Screenshot: LLM generations in PostHog]
  6. Next steps

    Recommended

    Now that you're capturing AI conversations, continue with the resources below to learn what else LLM Analytics enables within the PostHog platform.

    Resource | Description
    Basics | Learn the basics of how LLM calls become events in PostHog.
    Generations | Read about the $ai_generation event and its properties.
    Traces | Explore the trace hierarchy and how to use it to debug LLM calls.
    Spans | Review spans and their role in representing individual operations.
    Analyze LLM performance | Learn how to create dashboards to analyze LLM performance.
