# AWS Bedrock LLM analytics installation - Docs

1.  ## Install dependencies

    Required

    Install the OpenTelemetry SDK, OTLP exporter, and the AWS SDK instrumentation for your language.


    ### Python

    ```bash
    pip install boto3 opentelemetry-instrumentation-botocore opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
    ```

    ### Node

    ```bash
    npm install @aws-sdk/client-bedrock-runtime @opentelemetry/instrumentation-aws-sdk @opentelemetry/sdk-node @opentelemetry/resources @posthog/ai
    ```

2.  ## Set up the OpenTelemetry exporter

    Required

    Configure the OpenTelemetry SDK to export traces to PostHog's OTLP ingestion endpoint. PostHog converts `gen_ai.*` spans into `$ai_generation` events automatically.


    ### Python

    ```python
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.sdk.resources import Resource, SERVICE_NAME
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from opentelemetry.instrumentation.botocore import BotocoreInstrumentor
    resource = Resource(attributes={
        SERVICE_NAME: "my-ai-app",
    })
    exporter = OTLPSpanExporter(
        endpoint="https://us.i.posthog.com/i/v0/ai/otel",
        headers={"Authorization": "Bearer <ph_project_token>"},
    )
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
    BotocoreInstrumentor().instrument()
    ```

    ### Node

    ```typescript
    import { NodeSDK } from '@opentelemetry/sdk-node'
    import { resourceFromAttributes } from '@opentelemetry/resources'
    import { PostHogTraceExporter } from '@posthog/ai/otel'
    import { AwsInstrumentation } from '@opentelemetry/instrumentation-aws-sdk'
    const sdk = new NodeSDK({
      resource: resourceFromAttributes({
        'service.name': 'my-ai-app',
      }),
      traceExporter: new PostHogTraceExporter({
        apiKey: '<ph_project_token>',
        host: 'https://us.i.posthog.com',
      }),
      instrumentations: [new AwsInstrumentation()],
    })
    sdk.start()
    ```

3.  ## Call Bedrock

    Required

    Make Bedrock API calls as normal. The instrumentation automatically captures `gen_ai.*` spans for Converse, ConverseStream, InvokeModel, and InvokeModelWithResponseStream operations.


    ### Python

    ```python
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(
        modelId="us.anthropic.claude-3-5-haiku-20241022-v1:0",
        messages=[
            {
                "role": "user",
                "content": [{"text": "Tell me a fun fact about hedgehogs."}],
            }
        ],
    )
    print(response["output"]["message"]["content"][0]["text"])
    ```

    ### Node

    ```typescript
    // The AWS SDK must be imported after sdk.start() so the
    // instrumentation can patch it.
    const {
      BedrockRuntimeClient,
      ConverseCommand,
    } = await import('@aws-sdk/client-bedrock-runtime')
    const client = new BedrockRuntimeClient({ region: 'us-east-1' })
    const response = await client.send(
      new ConverseCommand({
        modelId: 'us.anthropic.claude-3-5-haiku-20241022-v1:0',
        messages: [
          {
            role: 'user',
            content: [{ text: 'Tell me a fun fact about hedgehogs.' }],
          },
        ],
      })
    )
    console.log(response.output?.message?.content?.[0]?.text)
    await sdk.shutdown()
    ```
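    Streaming calls are captured too, and for those PostHog can also record `$ai_time_to_first_token`. Below is a minimal Python sketch using `converse_stream`; the helper name `stream_text` is illustrative, and the model ID is the same example used above:

    ```python
    def stream_text(client, model_id, prompt):
        """Yield text deltas from a Bedrock ConverseStream response."""
        response = client.converse_stream(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        for event in response["stream"]:
            # Text arrives in contentBlockDelta events; other event types
            # (messageStart, metadata, messageStop, ...) are skipped here.
            delta = event.get("contentBlockDelta", {}).get("delta", {})
            if "text" in delta:
                yield delta["text"]

    if __name__ == "__main__":
        import boto3  # client construction kept here so the helper stays testable
        client = boto3.client("bedrock-runtime", region_name="us-east-1")
        for chunk in stream_text(
            client,
            "us.anthropic.claude-3-5-haiku-20241022-v1:0",
            "Tell me a fun fact about hedgehogs.",
        ):
            print(chunk, end="", flush=True)
        print()
    ```

    Because the instrumentation patches the client itself, the streaming call is traced the same way as the non-streaming `converse` example above.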

    **Supported models**

    The instrumentation emits `gen_ai.*` spans for **Amazon Titan**, **Amazon Nova**, and **Anthropic Claude** models. Tool call instrumentation is available for Amazon Nova and Anthropic Claude 3+.
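    When a request declares tools, the captured `$ai_generation` event includes them under `$ai_tools`. As a sketch, here is a Converse request carrying a tool definition; the `get_weather` tool and the `build_tool_request` helper are hypothetical:

    ```python
    def build_tool_request(model_id, prompt):
        """Build kwargs for client.converse() with one example tool attached."""
        return {
            "modelId": model_id,
            "messages": [{"role": "user", "content": [{"text": prompt}]}],
            "toolConfig": {
                "tools": [
                    {
                        "toolSpec": {
                            # Hypothetical tool, shown only to illustrate the shape
                            # the instrumentation records as $ai_tools.
                            "name": "get_weather",
                            "description": "Get the current weather for a city.",
                            "inputSchema": {
                                "json": {
                                    "type": "object",
                                    "properties": {"city": {"type": "string"}},
                                    "required": ["city"],
                                }
                            },
                        }
                    }
                ]
            },
        }
    ```

    Pass the result to the client as `client.converse(**build_tool_request(...))`.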

    > **Note:** If you want to capture LLM events anonymously, omit the `posthog_distinct_id`. See our docs on [anonymous vs identified events](/docs/data/anonymous-vs-identified-events.md) to learn more.

    You can expect captured `$ai_generation` events to have the following properties:

    | Property | Description |
    | --- | --- |
    | $ai_model | The specific model, like gpt-5-mini or claude-4-sonnet |
    | $ai_latency | The latency of the LLM call in seconds |
    | $ai_time_to_first_token | Time to first token in seconds (streaming only) |
    | $ai_tools | Tools and functions available to the LLM |
    | $ai_input | List of messages sent to the LLM |
    | $ai_input_tokens | The number of tokens in the input (often found in response.usage) |
    | $ai_output_choices | List of response choices from the LLM |
    | $ai_output_tokens | The number of tokens in the output (often found in response.usage) |
    | $ai_total_cost_usd | The total cost in USD (input + output) |
    | [[...]](/docs/llm-analytics/generations.md#event-properties) | See [full list](/docs/llm-analytics/generations.md#event-properties) of properties |

4.  ## Verify traces and generations

    Recommended

    *Confirm LLM events are being sent to PostHog*

    Under **LLM analytics**, you should see rows of data appear in the **Traces** and **Generations** tabs.

    ![LLM traces in PostHog](https://res.cloudinary.com/dmukukwp6/image/upload/SCR_20250807_syne_ecd0801880.png)

    ![LLM generations in PostHog](https://res.cloudinary.com/dmukukwp6/image/upload/SCR_20250807_syjm_5baab36590.png)

    [Check for LLM events in PostHog](https://app.posthog.com/llm-analytics/generations)

5.  ## Next steps

    Recommended

    Now that you're capturing AI conversations, continue with the resources below to learn what else LLM Analytics enables within the PostHog platform.

    | Resource | Description |
    | --- | --- |
    | [Basics](/docs/llm-analytics/basics.md) | Learn the basics of how LLM calls become events in PostHog. |
    | [Generations](/docs/llm-analytics/generations.md) | Read about the $ai_generation event and its properties. |
    | [Traces](/docs/llm-analytics/traces.md) | Explore the trace hierarchy and how to use it to debug LLM calls. |
    | [Spans](/docs/llm-analytics/spans.md) | Review spans and their role in representing individual operations. |
    | [Analyze LLM performance](/docs/llm-analytics/dashboard.md) | Learn how to create dashboards to analyze LLM performance. |
