Connect your LLM events to error tracking to debug failures and monitor exceptions in your AI workflows. This integration helps you correlate errors with specific LLM traces and understand what went wrong in your AI features.
Why link to error tracking?
Linking LLM events to error tracking enables you to:
- Navigate between products: Click from an error in Error Tracking to view the full LLM trace that caused it
- Debug faster: See the exact prompts, model responses, and metadata associated with failed LLM operations
- Monitor reliability: Track error rates for specific LLM models, prompt versions, or user segments
- Set up alerts: Create alerts for when LLM-related errors exceed thresholds
- Analyze patterns: Identify common failure modes in your AI features
Capturing LLM-related exceptions
When an LLM operation fails or encounters an error, you can link the exception to the LLM trace using the `$ai_trace_id` property:
```javascript
app.post('/api/chat', async (req, res) => {
  const { message } = req.body
  const traceId = generateTraceId() // Your trace ID generation logic

  try {
    const response = await openai.chat.completions.create({
      model: 'gpt-5',
      messages: [{ role: 'user', content: message }],
      posthogDistinctId: req.userId,
      posthogTraceId: traceId, // Sets the trace ID for this LLM call
      posthogProperties: {
        endpoint: '/api/chat'
      }
    })

    res.json({ response: response.choices[0].message.content })
  } catch (error) {
    // Capture the exception with the same trace ID
    posthog.captureException(error, {
      $ai_trace_id: traceId, // Links exception to the LLM trace
      endpoint: '/api/chat',
      user_id: req.userId,
      llm_model: 'gpt-5',
      error_type: 'llm_api_error'
    })

    res.status(500).json({ error: 'Failed to generate response' })
  }
})
```
Tracking validation and processing errors
You can also track errors that occur during prompt validation, response processing, or any other part of your LLM pipeline:
```javascript
async function processLLMResponse(response, traceId) {
  try {
    // Validate response structure
    if (!response.choices?.[0]?.message?.content) {
      throw new Error('Invalid response structure from LLM')
    }

    // Process the response
    const processedContent = await parseAndValidate(response.choices[0].message.content)
    return processedContent
  } catch (error) {
    // Capture processing errors with context
    posthog.captureException(error, {
      $ai_trace_id: traceId,
      stage: 'response_processing',
      error_details: error.message
    })
    throw error
  }
}
```
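The `parseAndValidate` call above is app-specific. One hypothetical shape, assuming you prompt the model to return JSON with an `answer` field, is a validator that throws on bad output so the caller can capture the error with the trace ID:

```javascript
// Hypothetical validator: parse model output as JSON and check its shape.
// Throwing here lets processLLMResponse attach the trace ID on capture.
async function parseAndValidate(content) {
  let parsed
  try {
    parsed = JSON.parse(content)
  } catch {
    throw new Error('LLM response is not valid JSON')
  }
  if (typeof parsed.answer !== 'string') {
    throw new Error('LLM response is missing the "answer" field')
  }
  return parsed
}
```

Because the thrown error is re-captured upstream with the same `$ai_trace_id`, validation failures group under the same trace as API failures.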
Viewing linked traces
Once you've set up error linking, you can navigate from errors to their corresponding LLM traces:
1. In the error tracking dashboard, find the error you want to investigate
2. Click the LLM trace button to jump directly to the full trace
3. View the complete context including prompts, responses, and metadata that led to the error
This linking helps you quickly understand what went wrong in your AI features by providing the full context around any error.