MCP servers give LLMs powerful capabilities, but without analytics and error tracking you're flying blind. Which tools get called? How often? Where are the bottlenecks? What's failing?
This tutorial walks you through how to add product analytics and error tracking to any MCP server using a simple wrapper pattern. This implementation tracks every tool execution without touching core business logic.
Claude Desktop or another MCP client to test your MCP server
Basic TypeScript knowledge
Code editor (e.g., VS Code, Cursor)
MCP's design and the wrapper pattern
MCP servers have an architecture that makes the wrapper pattern a natural fit for extended functionality like analytics and error tracking.
Why? MCP's functional design means wrappers work seamlessly, without the middleware pipelines of web frameworks or the decorators of class-based systems.
Here's what the boilerplate code looks like for MCP tool registration:
```typescript
// This is how MCP tools are registered: already functional style
server.tool(
  "toolName",
  { /* schema */ },
  { /* metadata */ },
  async (args) => { /* handler function */ }
);
```
Since MCP tools are mostly just async functions passed to server.tool(), wrapping the handler function is a clean and lightweight way of adding or extending functionality – in this case, analytics and error tracking.
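As a toy illustration of the idea, any higher-order function can wrap a handler without the handler knowing about it. The withTiming helper below is hypothetical and not part of the example project:

```typescript
// Hypothetical: wrap any async tool handler to time it, without changing it
const withTiming =
  <A, R>(name: string, handler: (args: A) => Promise<R>) =>
  async (args: A): Promise<R> => {
    const start = Date.now();
    try {
      return await handler(args);
    } finally {
      // Log to stderr so stdout stays free for MCP protocol messages
      console.error(`[${name}] took ${Date.now() - start}ms`);
    }
  };
```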
1. MCP server setup
To get us started quickly, we've built an MCP server for you to add product analytics and error tracking to. Start by cloning the repository.
Notice that these tools contain only business logic, with zero dependencies on analytics or error tracking libraries. Keeping your tool definitions decoupled from other external logic makes them easier to test, maintain, and reuse across different contexts.
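For a sense of what that looks like, here is a sketch of a purely business-logic tool. The checkStock name, schema, and inventory data are illustrative, not copied from the repository:

```typescript
import { z } from "zod";

// Hypothetical inventory data standing in for the repository's business logic
const inventory = [{ id: "1", name: "Widget", quantity: 42 }];

server.tool(
  "checkStock",
  { productId: z.string() },
  async ({ productId }) => {
    const item = inventory.find((i) => i.id === productId);
    if (!item) throw new Error(`Product ${productId} not found`);
    return {
      content: [{ type: "text" as const, text: `${item.name}: ${item.quantity} in stock` }],
    };
  }
);
```

There is no analytics code anywhere in the handler, and that stays true even after the wrapper is introduced.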
3. MCP analytics provider interface
Next, let's take a look at the TypeScript interface for the MCP analytics provider in the analytics.ts file. It defines a standard set of methods for sending analytics data from your MCP server.
It has three core abilities:
Track tool calls
Capture errors
Close the analytics client
./src/analytics.ts
```typescript
export interface AnalyticsProvider {
  /**
   * Track a successful tool execution with timing information
   * @param toolName - Name of the tool that was executed
   * @param result - Execution results including duration and success status
   */
  trackTool(toolName: string, result: { duration_ms: number; success: boolean }): Promise<void>;

  /** Track a failed tool execution with error context */
  trackError(error: Error, context: { tool_name: string; duration_ms: number }): Promise<void>;

  /** Gracefully shut down the analytics client and flush pending events */
  close(): Promise<void>;
}
```
This approach makes your code testable and flexible.
Think of the interface as a generic adapter for analytics calls. Want to use a different analytics provider? Write a new implementation. Need to debug locally? Create a file-based logger. Running tests? Use a no-emit version that tracks calls without sending data.
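For example, a debug-only implementation might look like the sketch below. The ConsoleAnalyticsProvider name is ours; the method signatures simply follow the interface above:

```typescript
// A minimal provider for local debugging or tests: satisfies AnalyticsProvider
// but only logs to stderr and sends nothing over the network.
export class ConsoleAnalyticsProvider implements AnalyticsProvider {
  async trackTool(toolName: string, result: { duration_ms: number; success: boolean }): Promise<void> {
    console.error(`[analytics] ${toolName} finished in ${result.duration_ms}ms (success: ${result.success})`);
  }

  async trackError(error: Error, context: { tool_name: string; duration_ms: number }): Promise<void> {
    console.error(`[analytics] ${context.tool_name} failed after ${context.duration_ms}ms:`, error.message);
  }

  async close(): Promise<void> {
    // Nothing to flush for a console logger
  }
}
```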
4. withAnalytics() wrapper
In the same analytics.ts file, let's explore the core design pattern: the withAnalytics() wrapper that intercepts every tool call. The wrapper function is responsible for invoking the analytics provider methods defined in the previous step.
The withAnalytics() function:
Times every tool call execution
Tracks success/failure
Preserves normal error handling
Works without an analytics provider
./src/analytics.ts
```typescript
export async function withAnalytics<T>(
  analytics: AnalyticsProvider | undefined,
  toolName: string,
  handler: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    const result = await handler();
    const duration_ms = Date.now() - start;

    // Track successful execution
    await analytics?.trackTool(toolName, {
      duration_ms,
      success: true
    });

    return result;
  } catch (error) {
    const duration_ms = Date.now() - start;

    // Track the error with context
    await analytics?.trackError(error as Error, {
      tool_name: toolName,
      duration_ms
    });

    throw error; // Re-throw so MCP handles it normally
  }
}
```
5. PostHog product analytics and error tracking
Now let's send those analytics somewhere useful. The posthog.ts file initializes the PostHog client and implements the AnalyticsProvider interface with the calls needed to capture data and send it to PostHog.
The PostHogAnalyticsProvider class leverages the PostHog Node.js SDK to capture custom events for product analytics and exceptions for built-in error tracking.
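A minimal sketch of such a provider follows, assuming the posthog-node SDK's capture(), captureException(), and shutdown() methods. The event name, distinct ID scheme, and constructor arguments are our assumptions rather than the repository's exact code:

```typescript
import { PostHog } from "posthog-node";
import type { AnalyticsProvider } from "./analytics";

export class PostHogAnalyticsProvider implements AnalyticsProvider {
  private client: PostHog;

  constructor(apiKey: string, host = "https://us.i.posthog.com") {
    this.client = new PostHog(apiKey, { host });
  }

  async trackTool(toolName: string, result: { duration_ms: number; success: boolean }): Promise<void> {
    this.client.capture({
      distinctId: "mcp-server",      // assumed: one server-level distinct ID
      event: "mcp_tool_called",      // assumed event name
      properties: { tool_name: toolName, ...result },
    });
  }

  async trackError(error: Error, context: { tool_name: string; duration_ms: number }): Promise<void> {
    // Sends the exception to PostHog's error tracking with tool context attached
    this.client.captureException(error, "mcp-server", context);
  }

  async close(): Promise<void> {
    await this.client.shutdown();    // flush any queued events before exit
  }
}
```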
Notice how each tool handler function is wrapped with the withAnalytics() wrapper we saw earlier. Every tool call is tracked by the PostHogAnalyticsProvider class, capturing analytics, tracking errors, and sending data to PostHog.
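Continuing the hypothetical checkStock tool from earlier, the registration changes only by wrapping the handler body:

```typescript
server.tool(
  "checkStock",
  { productId: z.string() },
  async ({ productId }) =>
    withAnalytics(analytics, "checkStock", async () => {
      // The same business logic as before, completely unchanged
      const item = inventory.find((i) => i.id === productId);
      if (!item) throw new Error(`Product ${productId} not found`);
      return {
        content: [{ type: "text" as const, text: `${item.name}: ${item.quantity} in stock` }],
      };
    })
);
```

The handler never imports PostHog or knows analytics exists; the wrapper owns all of that.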
7. Injecting the PostHogAnalyticsProvider
In the main index.ts file, the PostHogAnalyticsProvider is injected into the MCP server on initialization.
console.error("[SERVER] Error during server startup:", err);
process.exit(1);
}
}
(async()=>{
awaitmain();
})();
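Inside that try block, the wiring might look roughly like this. The createServer() helper and the POSTHOG_API_KEY environment variable are assumptions; StdioServerTransport and server.connect() come from the MCP TypeScript SDK:

```typescript
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const analytics = process.env.POSTHOG_API_KEY
  ? new PostHogAnalyticsProvider(process.env.POSTHOG_API_KEY)
  : undefined; // withAnalytics() tolerates a missing provider

const server = createServer(analytics);           // hypothetical factory that registers the tools
await server.connect(new StdioServerTransport()); // serve over stdio for MCP clients
```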
8. Testing the MCP server
Now we can test our MCP server with Claude Desktop, or any compatible MCP client, to see MCP analytics in action.
Note: The client runs our server as a child process, so we don't need to start it in a terminal ourselves. We just update Claude Desktop's config file so it can run our build.
Open Claude Desktop's Settings > Developer. Then select Edit Config to open the configuration file.
Update the claude_desktop_config.json file to include the following:
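The exact entry depends on where you cloned and built the project; a minimal config takes this shape, with the server name and build path as placeholders:

```json
{
  "mcpServers": {
    "mcp-analytics-demo": {
      "command": "node",
      "args": ["/absolute/path/to/your/build/index.js"]
    }
  }
}
```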
It should look like the following in Claude Desktop's Settings > Developer:
You can then try these prompts:
"Show me the inventory" → Should successfully return the inventory.
"Check stock for product 999" → Should throw an error.
"Analyze this data: quarterly sales" → Simulates a slow operation.
Our MCP server executing tool calls.
9. Create MCP analytics dashboards
Now that our MCP server is capturing analytics and sending them as events to PostHog, we can build insights and dashboards in PostHog to visualize the data. We can set up:
Performance dashboards
Reliability dashboards
Usage dashboards
MCP analytics sent to PostHog as captured events and exceptions