Vercel AI LLM analytics installation

LLM analytics is currently in beta. To access it, enable the feature preview in your PostHog account.

  1. Install the PostHog SDK

    Required

    Setting up analytics starts with installing the PostHog SDK.

    Terminal
    npm install @posthog/ai posthog-node
  2. Install the Vercel AI SDK

    Required

    Install the Vercel AI SDK:

    Terminal
    npm install ai @ai-sdk/openai
    Proxy note

    These SDKs don't proxy your calls; they only fire off an async request to PostHog in the background to send the data.

    You can also use LLM analytics with other SDKs or our API, but you will need to capture the data manually via the capture method. See the schema in the manual capture section for more details.
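
    If you capture manually, each LLM call becomes one event captured with posthog-node. The sketch below is a rough illustration rather than the exact schema: the $ai_generation event name and $ai_* property names are assumptions based on the properties listed in step 4, and the token counts and latency would come from your provider's response.

    TypeScript
    import { PostHog } from "posthog-node";

    const phClient = new PostHog(
      '<ph_project_api_key>',
      { host: 'https://us.i.posthog.com' }
    );

    // Hypothetical manual capture: one $ai_generation event per LLM call,
    // with values taken from your provider's request and response.
    phClient.capture({
      distinctId: "user_123",
      event: "$ai_generation",
      properties: {
        $ai_model: "gpt-4-turbo",
        $ai_provider: "openai",
        $ai_input: [{ role: "user", content: "Tell me a joke" }],
        $ai_input_tokens: 12,
        $ai_output_choices: [{ role: "assistant", content: "Here's one..." }],
        $ai_output_tokens: 25,
        $ai_latency: 1.2, // seconds
        $ai_trace_id: "trace_123",
      },
    });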

  3. Initialize PostHog and Vercel AI

    Required

    Where you initialize the Vercel AI SDK, import PostHog and our withTracing wrapper. Initialize PostHog with your project API key and host (from your project settings), then pass the client to withTracing.

    TypeScript
    import { PostHog } from "posthog-node";
    import { withTracing } from "@posthog/ai";
    import { generateText } from "ai";
    import { createOpenAI } from "@ai-sdk/openai";

    const phClient = new PostHog(
      '<ph_project_api_key>',
      { host: 'https://us.i.posthog.com' }
    );

    const openaiClient = createOpenAI({
      apiKey: 'your_openai_api_key',
      compatibility: 'strict'
    });

    const model = withTracing(openaiClient("gpt-4-turbo"), phClient, {
      posthogDistinctId: "user_123", // optional
      posthogTraceId: "trace_123", // optional
      posthogProperties: { "conversation_id": "abc123", "paid": true }, // optional
      posthogPrivacyMode: false, // optional
      posthogGroups: { "company": "company_id_in_your_db" }, // optional
    });

    // Call shutdown() when your process exits so queued events are flushed to PostHog
    phClient.shutdown()
  4. Call Vercel AI

    Required

    Now, when you use the Vercel AI SDK, it automatically captures many properties in PostHog, including $ai_input, $ai_input_tokens, $ai_latency, $ai_model, $ai_model_parameters, $ai_output_choices, and $ai_output_tokens. This works for both text and image message types.

    You can also capture or modify additional properties with the posthogDistinctId, posthogTraceId, posthogProperties, posthogGroups, and posthogPrivacyMode parameters.

    TypeScript
    const { text } = await generateText({
      model: model,
      prompt: message,
    });
    console.log(text)
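
    Because image message types are captured too, a multimodal call through the same wrapped model is traced the same way. A minimal sketch using the AI SDK's messages format (the image URL is a placeholder, and it assumes the model you wrapped supports vision input):

    TypeScript
    const { text: answer } = await generateText({
      model: model, // the withTracing-wrapped model from step 3
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "What is in this image?" },
            // The AI SDK also accepts base64 strings or Uint8Arrays for image parts
            { type: "image", image: new URL("https://example.com/photo.jpg") },
          ],
        },
      ],
    });
    console.log(answer)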

    Note: If you want to capture LLM events anonymously, don't pass a distinct ID to the request. See our docs on anonymous vs identified events to learn more.
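
    If you want several generations (for example, the turns of one conversation) to show up under a single trace, pass the same posthogTraceId when wrapping the model. A sketch, assuming a conversation ID from your own app:

    TypeScript
    // Hypothetical conversation-scoped trace ID from your own app
    const conversationTraceId = "conversation_abc123";

    const chatModel = withTracing(openaiClient("gpt-4-turbo"), phClient, {
      posthogDistinctId: "user_123",
      posthogTraceId: conversationTraceId,
    });

    // Both generations are grouped under the same trace in LLM analytics
    const joke = await generateText({ model: chatModel, prompt: "Tell me a joke." });
    const followUp = await generateText({ model: chatModel, prompt: "Explain why it's funny." });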

  5. Verify traces and generations

    Checkpoint
    Confirm LLM events are being sent to PostHog

    Before proceeding, let's make sure LLM events are being captured and sent to PostHog. Under LLM analytics, you should see rows of data appear in the Traces and Generations tabs.


    [Screenshot: LLM generations in PostHog, showing LLM events in the Generations tab]
