Calculating LLM costs
How are LLM costs calculated?
PostHog calculates cost based on the number of input (prompt) and output (completion) tokens generated by specific AI models.
To determine the pricing for a model, we use a two-step matching process:
1. **Primary matching:** We use both the `$ai_provider` and `$ai_model` properties from your events to find the exact pricing for a model at a specific provider. This allows us to account for price variations across different providers for the same model.
2. **Fallback matching:** If we can't find pricing data for a specific provider-model combination, we fall back to OpenRouter's pricing data. OpenRouter provides general pricing information for models without provider-specific breakdowns, which we use as a default when exact provider pricing isn't available.
For cached LLM responses, our pricing models include cached token pricing, which we apply automatically. We also account for reasoning/thinking tokens on models that support them.
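If you capture events manually, this means reporting cache and reasoning token counts alongside the usual ones. A hedged sketch follows; the `$ai_cache_read_input_tokens` and `$ai_reasoning_tokens` property names here are assumptions based on PostHog's naming convention, so verify them against the current property reference:

```python
from posthog import Posthog

posthog = Posthog(project_api_key="phc_your_key", host="https://us.i.posthog.com")

posthog.capture(
    distinct_id="user_123",
    event="$ai_generation",
    properties={
        "$ai_provider": "anthropic",
        "$ai_model": "claude-3-7-sonnet-20250219",
        "$ai_input_tokens": 1200,
        "$ai_output_tokens": 350,
        # Assumed property names -- check PostHog's property reference:
        "$ai_cache_read_input_tokens": 800,  # tokens served from cache
        "$ai_reasoning_tokens": 150,         # thinking tokens, if reported
    },
)
```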
Setting custom pricing
You can override PostHog's automatic cost calculation by providing custom pricing for your LLM models. This is useful when:
- You have negotiated custom pricing with your LLM provider
- You're using a model that PostHog doesn't support yet
- PostHog's automatic pricing doesn't match your specific use case
Option 1: Custom price per token
If you know your pricing per token, you can set the following custom properties when calling your LLM:
- `$ai_input_token_price` (required): Price per input/prompt token
- `$ai_output_token_price` (required): Price per output/completion token
- `$ai_cache_read_token_price` (optional): Price per cached token read
- `$ai_cache_write_token_price` (optional): Price per cached token write
**Important:** These prices should be per individual token, not per million tokens. For example, if your provider charges $0.03 per 1M tokens, you would set `$ai_input_token_price: 0.00000003` (0.03 / 1,000,000).
Both `$ai_input_token_price` and `$ai_output_token_price` must be provided for custom pricing to take effect. PostHog will then calculate the total cost based on the token counts and your custom prices.
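Putting it together, an event with custom per-token prices might look like the following sketch (same placeholder client as above; the model name and rates are invented for illustration):

```python
from posthog import Posthog

posthog = Posthog(project_api_key="phc_your_key", host="https://us.i.posthog.com")

posthog.capture(
    distinct_id="user_123",
    event="$ai_generation",
    properties={
        "$ai_model": "my-fine-tuned-model",  # hypothetical model name
        "$ai_input_tokens": 1200,
        "$ai_output_tokens": 350,
        # Both prices are required, expressed per individual token:
        "$ai_input_token_price": 0.00000003,   # $0.03 per 1M input tokens
        "$ai_output_token_price": 0.00000006,  # $0.06 per 1M output tokens
    },
)
```

With these values, PostHog would report 1,200 × $0.00000003 = $0.000036 for input and 350 × $0.00000006 = $0.000021 for output.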
Option 2: Pre-calculated costs
If you're manually capturing LLM events and have already calculated the total costs yourself, you can send them directly:
- `$ai_input_cost_usd`: Total cost for input/prompt tokens in USD
- `$ai_output_cost_usd`: Total cost for output/completion tokens in USD
PostHog will use these values directly without any additional calculation.
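A sketch of such an event, assuming you computed the costs in your own billing code (placeholder client and model name as above):

```python
from posthog import Posthog

posthog = Posthog(project_api_key="phc_your_key", host="https://us.i.posthog.com")

# Costs computed elsewhere, e.g. from your provider's invoice line items.
posthog.capture(
    distinct_id="user_123",
    event="$ai_generation",
    properties={
        "$ai_model": "my-fine-tuned-model",  # hypothetical model name
        "$ai_input_cost_usd": 0.000036,
        "$ai_output_cost_usd": 0.000021,
    },
)
```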
Precedence
Custom pricing follows this precedence order:
- **Pre-calculated costs** (`$ai_input_cost_usd` and `$ai_output_cost_usd`): These values are used directly.
- **Custom price per token** (`$ai_input_token_price` and `$ai_output_token_price`): PostHog calculates costs from token counts.
- **Automatic matching:** PostHog uses the standard primary and fallback matching process.
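In pseudocode form, the precedence reads as the sketch below; this is an illustration of the order above, not PostHog's actual implementation:

```python
def resolve_cost(props: dict) -> tuple[float, float] | None:
    """Illustrative sketch of the precedence order, not PostHog's real code.

    Returns (input_cost_usd, output_cost_usd), or None to signal that the
    standard primary/fallback model matching should run instead.
    """
    # 1. Pre-calculated costs are used directly.
    if "$ai_input_cost_usd" in props and "$ai_output_cost_usd" in props:
        return props["$ai_input_cost_usd"], props["$ai_output_cost_usd"]

    # 2. Custom per-token prices: both must be present; costs are derived
    #    from the captured token counts.
    if "$ai_input_token_price" in props and "$ai_output_token_price" in props:
        return (
            props.get("$ai_input_tokens", 0) * props["$ai_input_token_price"],
            props.get("$ai_output_tokens", 0) * props["$ai_output_token_price"],
        )

    # 3. Otherwise, fall through to automatic provider/model matching
    #    (with OpenRouter pricing as the fallback).
    return None
```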
You can find the code for this on GitHub.