Link error tracking

Connect your LLM events to error tracking to debug failures and monitor exceptions in your AI workflows. This integration helps you correlate errors with specific LLM traces and understand what went wrong in your AI features.

Linking LLM events to error tracking enables you to:

  • Navigate between products: Click from an error in Error Tracking to view the full LLM trace that caused it
  • Debug faster: See the exact prompts, model responses, and metadata associated with failed LLM operations
  • Monitor reliability: Track error rates for specific LLM models, prompt versions, or user segments
  • Set up alerts: Create alerts for when LLM-related errors exceed thresholds
  • Analyze patterns: Identify common failure modes in your AI features

When an LLM operation fails, you can link the captured exception to the LLM trace using the $ai_trace_id property:

app.post('/api/chat', async (req, res) => {
  const { message } = req.body
  const traceId = generateTraceId() // Your trace ID generation logic

  try {
    const response = await openai.chat.completions.create({
      model: 'gpt-5',
      messages: [{ role: 'user', content: message }],
      posthogDistinctId: req.userId,
      posthogTraceId: traceId, // Sets the trace ID for this LLM call
      posthogProperties: {
        endpoint: '/api/chat'
      }
    })

    res.json({ response: response.choices[0].message.content })
  } catch (error) {
    // Capture the exception with the same trace ID
    posthog.captureException(error, {
      $ai_trace_id: traceId, // Links exception to the LLM trace
      endpoint: '/api/chat',
      user_id: req.userId,
      llm_model: 'gpt-5',
      error_type: 'llm_api_error'
    })

    res.status(500).json({ error: 'Failed to generate response' })
  }
})
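
This example assumes an OpenAI client wrapped by PostHog's SDK (so the posthog* parameters are captured with the generation event) and a generateTraceId helper. Here is a minimal setup sketch, assuming the @posthog/ai wrapper and a UUID-based trace ID; adjust both to your own stack:

// Minimal setup sketch for the example above. The @posthog/ai OpenAI
// wrapper and the UUID-based trace ID helper are assumptions; swap in
// your own client and ID scheme.
import { randomUUID } from 'crypto'
import { PostHog } from 'posthog-node'
import { OpenAI } from '@posthog/ai'

const posthog = new PostHog('<ph_project_api_key>', { host: 'https://us.i.posthog.com' })

// The wrapped client captures generation events automatically,
// including the posthogTraceId passed on each call
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  posthog,
})

// Any unique string works as a trace ID; a UUID is a simple choice
function generateTraceId() {
  return randomUUID()
}

What matters is that the same traceId value is passed to both posthogTraceId on the LLM call and $ai_trace_id on the exception; that shared ID is what links the two events.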

Tracking validation and processing errors

You can also track errors that occur during prompt validation, response processing, or any other part of your LLM pipeline:

async function processLLMResponse(response, traceId) {
  try {
    // Validate response structure
    if (!response.choices?.[0]?.message?.content) {
      throw new Error('Invalid response structure from LLM')
    }

    // Process the response
    const processedContent = await parseAndValidate(response.choices[0].message.content)
    return processedContent
  } catch (error) {
    // Capture processing errors with context
    posthog.captureException(error, {
      $ai_trace_id: traceId,
      stage: 'response_processing',
      error_details: error.message
    })

    throw error
  }
}
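
The parseAndValidate call above stands in for whatever post-processing your pipeline does. As a hypothetical sketch, assuming the model is expected to return JSON containing an answer field (both the function body and the field name are illustrative):

// Hypothetical validator: parses the model output as JSON and checks
// a required field. Your real validation logic will differ.
async function parseAndValidate(content) {
  const parsed = JSON.parse(content) // Throws on malformed JSON

  if (typeof parsed.answer !== 'string') {
    throw new Error('LLM response is missing the "answer" field')
  }

  return parsed
}

Because processLLMResponse re-throws after capturing, the caller can still handle the failure (for example, by returning a 500 response) while the exception stays linked to the trace.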

Viewing linked traces

Once you've set up error linking, you can navigate from errors to their corresponding LLM traces:

  1. In the error tracking dashboard, find the error you want to investigate
  2. Click the LLM trace button to jump directly to the full trace
  3. View the complete context including prompts, responses, and metadata that led to the error

This linking helps you quickly understand what went wrong in your AI features by providing the full context around any error.
