# Convex LLM analytics installation - Docs

1.  ## Install dependencies

    Required

    Install the PostHog AI package, the Vercel AI SDK, and the OpenTelemetry SDK.

    ```bash
    npm install @posthog/ai @ai-sdk/openai ai @opentelemetry/sdk-trace-base @opentelemetry/resources @opentelemetry/api
    ```

2.  ## Set environment variables

    Required

    Set your PostHog project API key and host as Convex environment variables. You can find these in your [project settings](https://app.posthog.com/settings/project).

    ```bash
    npx convex env set POSTHOG_API_KEY "<ph_project_token>"
    npx convex env set POSTHOG_HOST "https://us.i.posthog.com"
    ```

    You also need your AI provider's API key (e.g. `OPENAI_API_KEY`):

    ```bash
    npx convex env set OPENAI_API_KEY "your_openai_api_key"
    ```
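
    If a required variable is unset, the exporter will fail silently and spans are dropped. A small guard at module load makes this visible instead. This `requireEnv` helper is hypothetical (not part of `@posthog/ai` or Convex), a minimal sketch you could keep in your own module:

    ```typescript
    // Hypothetical helper: fail fast at module load if a required
    // environment variable is missing, rather than exporting spans
    // that are silently dropped downstream.
    function requireEnv(name: string, env: Record<string, string | undefined>): string {
      const value = env[name]
      if (value === undefined || value === '') {
        throw new Error(`Missing environment variable: ${name}`)
      }
      return value
    }

    // Usage inside a Convex module:
    // const apiKey = requireEnv('POSTHOG_API_KEY', process.env)
    ```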

3.  ## Capture LLM events with OpenTelemetry

    Required

    Create a Convex action that initializes a `BasicTracerProvider` with PostHog's trace exporter and enables telemetry on your AI SDK calls. The provider is initialized at module scope so it persists across warm V8 isolate invocations.

    ```typescript
    import { trace } from '@opentelemetry/api'
    import { BasicTracerProvider, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base'
    import { resourceFromAttributes } from '@opentelemetry/resources'
    import { generateText } from 'ai'
    import { openai } from '@ai-sdk/openai'
    import { PostHogTraceExporter } from '@posthog/ai/otel'
    import { action } from './_generated/server'
    import { v } from 'convex/values'

    const provider = new BasicTracerProvider({
      resource: resourceFromAttributes({
        'service.name': 'my-convex-app',
      }),
      spanProcessors: [
        new SimpleSpanProcessor(
          new PostHogTraceExporter({
            apiKey: process.env.POSTHOG_API_KEY!,
            host: process.env.POSTHOG_HOST,
          })
        ),
      ],
    })

    trace.setGlobalTracerProvider(provider)

    export const generate = action({
      args: {
        prompt: v.string(),
        distinctId: v.optional(v.string()),
      },
      handler: async (_ctx, args) => {
        const distinctId = args.distinctId ?? 'anonymous'
        const result = await generateText({
          model: openai('gpt-5-mini'),
          prompt: args.prompt,
          experimental_telemetry: {
            isEnabled: true,
            functionId: 'my-convex-action',
            metadata: {
              posthog_distinct_id: distinctId,
            },
          },
        })
        return { text: result.text, usage: result.usage }
      },
    })
    ```

    **How this works**

    The `PostHogTraceExporter` sends OpenTelemetry `gen_ai.*` spans to PostHog's OTLP ingestion endpoint. PostHog converts these into `$ai_generation` events automatically. The `posthog_distinct_id` metadata field links events to a specific user.
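
    As an illustration, that conversion amounts to renaming span attributes into event properties. This is a simplified sketch, not PostHog's actual implementation (which runs server-side and handles many more fields); the attribute names follow the OpenTelemetry `gen_ai` semantic conventions:

    ```typescript
    // Simplified sketch of the span-to-event mapping PostHog performs on
    // ingestion. The real conversion covers many more attributes.
    type SpanAttributes = Record<string, string | number>

    function toAiGenerationProps(attrs: SpanAttributes) {
      return {
        $ai_model: attrs['gen_ai.request.model'],
        $ai_input_tokens: attrs['gen_ai.usage.input_tokens'],
        $ai_output_tokens: attrs['gen_ai.usage.output_tokens'],
      }
    }

    // Example: attributes from a span emitted by the action above
    const props = toAiGenerationProps({
      'gen_ai.request.model': 'gpt-5-mini',
      'gen_ai.usage.input_tokens': 12,
      'gen_ai.usage.output_tokens': 48,
    })
    ```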

4.  ## Using Convex Agent

    Optional

    If you're using `@convex-dev/agent`, pass `experimental_telemetry` to the agent's `generateText` call:

    ```typescript
    import { trace } from '@opentelemetry/api'
    import { BasicTracerProvider, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base'
    import { resourceFromAttributes } from '@opentelemetry/resources'
    import { Agent } from '@convex-dev/agent'
    import { openai } from '@ai-sdk/openai'
    import { PostHogTraceExporter } from '@posthog/ai/otel'
    import { components } from './_generated/api'
    import { action } from './_generated/server'
    import { v } from 'convex/values'

    const provider = new BasicTracerProvider({
      resource: resourceFromAttributes({
        'service.name': 'my-convex-app',
      }),
      spanProcessors: [
        new SimpleSpanProcessor(
          new PostHogTraceExporter({
            apiKey: process.env.POSTHOG_API_KEY!,
            host: process.env.POSTHOG_HOST,
          })
        ),
      ],
    })

    trace.setGlobalTracerProvider(provider)

    export const generate = action({
      args: {
        prompt: v.string(),
        distinctId: v.optional(v.string()),
      },
      handler: async (ctx, args) => {
        const distinctId = args.distinctId ?? 'anonymous'
        const supportAgent = new Agent(components.agent, {
          name: 'support-agent',
          languageModel: openai('gpt-5-mini'),
          instructions: 'You are a helpful support agent.',
        })
        const { thread } = await supportAgent.createThread(ctx, {})
        const result = await thread.generateText({
          prompt: args.prompt,
          experimental_telemetry: {
            isEnabled: true,
            functionId: 'convex-agent',
            metadata: {
              posthog_distinct_id: distinctId,
            },
          },
        })
        return { text: result.text, usage: result.totalUsage }
      },
    })
    ```

    You can expect captured `$ai_generation` events to have the following properties:

    | Property | Description |
    | --- | --- |
    | $ai_model | The specific model, like gpt-5-mini or claude-4-sonnet |
    | $ai_latency | The latency of the LLM call in seconds |
    | $ai_time_to_first_token | Time to first token in seconds (streaming only) |
    | $ai_tools | Tools and functions available to the LLM |
    | $ai_input | List of messages sent to the LLM |
    | $ai_input_tokens | The number of tokens in the input (often found in response.usage) |
    | $ai_output_choices | List of response choices from the LLM |
    | $ai_output_tokens | The number of tokens in the output (often found in response.usage) |
    | $ai_total_cost_usd | The total cost in USD (input + output) |
    | [[...]](/docs/llm-analytics/generations.md#event-properties) | See [full list](/docs/llm-analytics/generations.md#event-properties) of properties |
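
    For example, `$ai_total_cost_usd` is per-token pricing applied to the captured token counts. A sketch with hypothetical rates (the constants below are illustrative only; real rates vary by model and provider and change over time):

    ```typescript
    // Hypothetical per-million-token rates, for illustration only.
    const INPUT_USD_PER_MTOK = 0.25
    const OUTPUT_USD_PER_MTOK = 2.0

    function totalCostUsd(inputTokens: number, outputTokens: number): number {
      return (
        (inputTokens / 1_000_000) * INPUT_USD_PER_MTOK +
        (outputTokens / 1_000_000) * OUTPUT_USD_PER_MTOK
      )
    }
    ```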

5.  ## Verify traces and generations

    Recommended

    *Confirm LLM events are being sent to PostHog*

    Let's make sure LLM events are being captured and sent to PostHog. Under **LLM analytics**, you should see rows of data appear in the **Traces** and **Generations** tabs.

    ![LLM generations in PostHog](https://res.cloudinary.com/dmukukwp6/image/upload/SCR_20250807_syne_ecd0801880.png)

    ![LLM traces in PostHog](https://res.cloudinary.com/dmukukwp6/image/upload/SCR_20250807_syjm_5baab36590.png)

    [Check for LLM events in PostHog](https://app.posthog.com/llm-analytics/generations)

6.  ## Next steps

    Recommended

    Now that you're capturing AI conversations, continue with the resources below to learn what else LLM Analytics enables within the PostHog platform.

    | Resource | Description |
    | --- | --- |
    | [Basics](/docs/llm-analytics/basics.md) | Learn the basics of how LLM calls become events in PostHog. |
    | [Generations](/docs/llm-analytics/generations.md) | Read about the $ai_generation event and its properties. |
    | [Traces](/docs/llm-analytics/traces.md) | Explore the trace hierarchy and how to use it to debug LLM calls. |
    | [Spans](/docs/llm-analytics/spans.md) | Review spans and their role in representing individual operations. |
    | [Analyze LLM performance](/docs/llm-analytics/dashboard.md) | Learn how to create dashboards to analyze LLM performance. |
