Semantic Kernel LLM analytics installation
1. Install the PostHog SDK

Required

Setting up analytics starts with installing the PostHog SDK. The Semantic Kernel integration uses PostHog's OpenAI wrapper.
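For example, with pip (the `posthog` package is PostHog's Python SDK; its OpenAI wrappers live in the `posthog.ai` module used below):

```bash
pip install posthog
```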
2. Install Semantic Kernel

Required

Install Semantic Kernel with OpenAI support. PostHog instruments your LLM calls by wrapping the OpenAI client that Semantic Kernel uses under the hood.
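Again with pip (assuming the Python distribution of Semantic Kernel, whose core package includes the OpenAI connector):

```bash
pip install semantic-kernel
```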
3. Initialize PostHog and Semantic Kernel

Required

Initialize PostHog with your project API key and host from your project settings, then create a PostHog `AsyncOpenAI` wrapper and pass it to Semantic Kernel's `OpenAIChatCompletion` service.

How this works: PostHog's `AsyncOpenAI` wrapper is a proper subclass of `openai.AsyncOpenAI`, so it works directly as the `async_client` parameter in Semantic Kernel's `OpenAIChatCompletion`. PostHog captures `$ai_generation` events automatically without proxying your calls.
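A minimal initialization sketch, assuming PostHog's `posthog.ai.openai.AsyncOpenAI` wrapper with its `posthog_client` argument and Semantic Kernel's `OpenAIChatCompletion` service; the model ID, host, and environment variable are placeholders:

```python
import os

from posthog import Posthog
from posthog.ai.openai import AsyncOpenAI
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

# PostHog client with your project API key and host (from your project settings)
posthog = Posthog(
    "<ph_project_api_key>",
    host="https://us.i.posthog.com",  # or your region/self-hosted host
)

# PostHog's AsyncOpenAI wrapper subclasses openai.AsyncOpenAI,
# so Semantic Kernel accepts it directly as the async_client
openai_client = AsyncOpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    posthog_client=posthog,
)

kernel = Kernel()
kernel.add_service(
    OpenAIChatCompletion(
        ai_model_id="gpt-4o-mini",  # example model ID
        async_client=openai_client,
    )
)
```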
4. Run your kernel function

Required

Use Semantic Kernel as normal. PostHog automatically captures an `$ai_generation` event for each LLM call made through the wrapped client.

You can expect captured `$ai_generation` events to have the following properties:

| Property | Description |
| --- | --- |
| `$ai_model` | The specific model, like `gpt-5-mini` or `claude-4-sonnet` |
| `$ai_latency` | The latency of the LLM call in seconds |
| `$ai_time_to_first_token` | Time to first token in seconds (streaming only) |
| `$ai_tools` | Tools and functions available to the LLM |
| `$ai_input` | List of messages sent to the LLM |
| `$ai_input_tokens` | The number of tokens in the input (often found in `response.usage`) |
| `$ai_output_choices` | List of response choices from the LLM |
| `$ai_output_tokens` | The number of tokens in the output (often found in `response.usage`) |
| `$ai_total_cost_usd` | The total cost in USD (input + output) |

[...]

See full list of properties
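Continuing from the initialization sketch above (reusing its `kernel` and `posthog` objects), a minimal invocation; `Kernel.invoke_prompt` is assumed here as Semantic Kernel's one-shot prompt helper, and the prompt text is illustrative:

```python
import asyncio

async def main() -> None:
    # Each LLM call made through the wrapped client is captured
    # as an $ai_generation event in PostHog
    result = await kernel.invoke_prompt(prompt="Explain feature flags in one sentence.")
    print(result)

asyncio.run(main())

# In short-lived scripts, flush queued events before exiting
posthog.flush()
```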
5. Next steps

Recommended

Now that you're capturing AI conversations, continue with the resources below to learn what else LLM Analytics enables within the PostHog platform.

| Resource | Description |
| --- | --- |
| Basics | Learn the basics of how LLM calls become events in PostHog. |
| Generations | Read about the `$ai_generation` event and its properties. |
| Traces | Explore the trace hierarchy and how to use it to debug LLM calls. |
| Spans | Review spans and their role in representing individual operations. |
| Analyze LLM performance | Learn how to create dashboards to analyze LLM performance. |

