Set up log alerts

Log alerts are coming soon

Log alerts aren't available yet. The documentation below describes how alerts will work once released.

Log alerts notify you when the volume of logs matching specific filters crosses a threshold. Use them to catch spikes in errors, drops in expected traffic, or unusual patterns across your services.

Alerts are checked every 5 minutes and can be configured with noise-reduction settings to avoid false positives from brief, one-off spikes.

Create an alert

You can manage alerts from the Alerts tab in the Logs section, or from Settings > Environment > Logs > Alerting.

  1. Click New alert.
  2. Give your alert a descriptive name (e.g. "API 5xx errors").
  3. Configure your filters, threshold rules, and notification destinations. Optionally adjust advanced options to reduce noise.
  4. (Optional) Use the Simulate panel to preview which logs would have matched your filters and threshold against recent historical data before saving.
  5. Click Create alert.

Configure filters

Filters determine which logs are counted against your threshold, and you can combine multiple filter types. Every alert requires at least one filter; saving without one returns the error "At least one filter is required".

Severity

Filter by log severity levels (e.g. ERROR, WARN, INFO). Select one or more levels from the dropdown.

Service

Filter by service name. Select one or more services from the dropdown to scope the alert to specific parts of your infrastructure.

Attributes

Filter by log attributes, resource attributes, or other log properties. Use the attribute search to find and add filters. These work the same way as filters in the log search view.
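
The filter types above can be combined; a log must pass every configured filter to count toward the threshold (this sketch assumes AND semantics across filters, and the field names `severity`, `service`, and `attributes` are illustrative, not the actual log schema):

```python
# Sketch: combining severity, service, and attribute filters.
# Field names are illustrative, not the real internal schema.

def log_matches(log: dict, *, severities=None, services=None, attributes=None) -> bool:
    """Return True if a log record passes every configured filter."""
    if severities and log.get("severity") not in severities:
        return False
    if services and log.get("service") not in services:
        return False
    # Attribute filters: every key/value pair must be present on the log.
    for key, value in (attributes or {}).items():
        if log.get("attributes", {}).get(key) != value:
            return False
    return True

logs = [
    {"severity": "ERROR", "service": "api", "attributes": {"region": "eu"}},
    {"severity": "INFO", "service": "api", "attributes": {"region": "eu"}},
    {"severity": "ERROR", "service": "worker", "attributes": {}},
]
# Count only ERROR logs from the api service.
matched = [log for log in logs if log_matches(log, severities={"ERROR"}, services={"api"})]
print(len(matched))  # 1
```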

Set threshold rules

Threshold rules define the condition that triggers the alert. Configure:

  • Operator – whether the log count must go above or below the threshold (default: above)
  • Count – the number of matching logs that triggers the alert (default: 100)
  • Window – the time window to check, one of 5, 10, 15, 30, or 60 minutes (default: 10 minutes)

For example, "alert if count goes above 100 in the last 10 minutes" fires when more than 100 matching logs are observed in a rolling 10-minute window.
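
The rolling-window check described above can be sketched as follows (a simplified illustration, not the actual evaluation code):

```python
from datetime import datetime, timedelta, timezone

# Sketch: evaluating "count goes above 100 in the last 10 minutes"
# over a rolling window of matching-log timestamps.

def threshold_breached(timestamps, *, operator="above", count=100,
                       window_minutes=10, now=None):
    """Count matching logs inside the rolling window and compare to the threshold."""
    now = now or datetime.now(timezone.utc)
    window_start = now - timedelta(minutes=window_minutes)
    in_window = sum(1 for ts in timestamps if ts >= window_start)
    return in_window > count if operator == "above" else in_window < count

now = datetime.now(timezone.utc)
# 150 matching logs one minute ago breaches "> 100 in 10m".
recent = [now - timedelta(minutes=1)] * 150
print(threshold_breached(recent, operator="above", count=100,
                         window_minutes=10, now=now))  # True
```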

Configure notifications

An alert without a destination has nowhere to send notifications. Add at least one of:

  • Slack – pick a connected Slack workspace and channel.
  • Webhook – provide a URL to receive an HTTP POST when the alert fires.

Destinations are delivered through Hog functions under the hood, so they reuse the same infrastructure as other PostHog destinations. The Notifications column in the alert list shows a struck-through bell when no destinations are configured.
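
If you point a webhook at your own service, it will receive an HTTP POST when the alert fires. A minimal consumer sketch; the payload fields used here (`alert_name`, `state`) are assumptions for illustration, since the actual payload shape isn't documented on this page:

```python
import json

# Sketch of a webhook consumer. The payload fields below are
# assumed for illustration -- check the real payload before relying on them.

def handle_alert_webhook(body: bytes) -> str:
    """Parse an assumed alert payload and format a one-line summary."""
    payload = json.loads(body)
    state = payload.get("state", "unknown")          # hypothetical field
    name = payload.get("alert_name", "unnamed alert")  # hypothetical field
    return f"[{state}] {name}"

example = json.dumps({"alert_name": "API 5xx errors", "state": "firing"}).encode()
print(handle_alert_webhook(example))  # [firing] API 5xx errors
```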

Advanced options

Reduce noise

By default, an alert fires as soon as a single check breaches the threshold. To prevent notifications from brief spikes, require multiple consecutive checks to match before firing.

Configure this with two values:

  • Datapoints to alarm – how many checks must breach the threshold (default: 1)
  • Evaluation periods – the total number of recent checks to consider (default: 1, max: 10)

For example, with datapoints to alarm set to 3 and evaluation periods set to 5 ("3 of 5 checks must match to fire"), the threshold must be breached in at least three of the last five check windows before the alert triggers.
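
The "datapoints to alarm out of evaluation periods" logic can be sketched as (a simplified illustration):

```python
# Sketch: fire only when enough of the recent checks breached the threshold.
# check_results holds recent check outcomes, oldest first (True = breached).

def should_fire(check_results, *, datapoints_to_alarm=3, evaluation_periods=5) -> bool:
    """Return True when at least datapoints_to_alarm of the last
    evaluation_periods checks breached the threshold."""
    recent = list(check_results)[-evaluation_periods:]
    return sum(recent) >= datapoints_to_alarm

checks = [True, False, True, False, True]
print(should_fire(checks))  # True: 3 of the last 5 checks breached
```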

The alert auto-resolves once the condition is no longer met.

Notification cooldown

The cooldown period (in minutes) controls how long the alert waits after sending a notification before it can send another. Set it to 0 (the default) to notify on every check that matches.
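
A sketch of how a cooldown gate might work (illustrative only, not the actual implementation):

```python
from datetime import datetime, timedelta, timezone

# Sketch: suppress repeat notifications inside the cooldown window.
# A cooldown of 0 means every matching check notifies.

def should_notify(last_notified, *, cooldown_minutes=0, now=None) -> bool:
    now = now or datetime.now(timezone.utc)
    if cooldown_minutes == 0 or last_notified is None:
        return True
    return now - last_notified >= timedelta(minutes=cooldown_minutes)

now = datetime.now(timezone.utc)
# Last notification 10 minutes ago, 30-minute cooldown: stay quiet.
print(should_notify(now - timedelta(minutes=10), cooldown_minutes=30, now=now))  # False
# Last notification 45 minutes ago: cooldown has expired.
print(should_notify(now - timedelta(minutes=45), cooldown_minutes=30, now=now))  # True
```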

Manage alerts

The alert list shows a table of all your alerts:

  • Name – The alert name. Click to open the alert detail view.
  • Status – Current alert state (see below).
  • Threshold – Summary of the threshold rule (e.g. > 100 in 10m).
  • Last checked – When the alert was last evaluated.
  • Last 24h – Sparkline of the alert's state over the last 24 hours. Green = OK, red = firing, orange = resolving/errored, grey = snoozed or disabled.
  • Notifications – Tags for each configured destination. A struck-through bell indicates no destinations.
  • Created by – The user who created the alert.
  • Enabled – Toggle to enable or disable the alert without deleting it.
  • Actions – Edit, view history, snooze, reset (when broken), and delete.

Alert states

  • OK – The alert condition is not met.
  • Firing – The alert condition is met and notifications are being sent.
  • Resolving – The condition was previously met but is no longer breaching the threshold.
  • Errored – Something went wrong evaluating the alert.
  • Snoozed – The alert is temporarily silenced.
  • Broken – The alert has failed 5 consecutive checks and won't be evaluated again until you reset it.

Edit an alert

Open the menu and select Edit. Update any fields and click Save. Clicking the alert name opens the detail view rather than the edit modal.

Snooze an alert

Open the menu and select Snooze, then pick a duration: 30 minutes, 1 hour, 4 hours, or 24 hours. The alert won't fire until the snooze expires. To clear it early, open the menu again and choose Unsnooze.

Reset a broken alert

If an alert errors on 5 consecutive checks, it enters the Broken state and stops evaluating. Open the menu and select Reset alert to clear the failure count and resume checks.
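
The failure counting behind the Broken state can be sketched as follows (the counter and reset behavior are inferred from the description above; the actual implementation may differ):

```python
# Sketch: consecutive-failure tracking behind the Broken state.
# After 5 consecutive errored checks the alert stops evaluating
# until a reset clears the counter.

class AlertHealth:
    MAX_CONSECUTIVE_FAILURES = 5

    def __init__(self):
        self.consecutive_failures = 0

    @property
    def broken(self) -> bool:
        return self.consecutive_failures >= self.MAX_CONSECUTIVE_FAILURES

    def record_check(self, succeeded: bool) -> None:
        # A successful check resets the streak; a failure extends it.
        self.consecutive_failures = 0 if succeeded else self.consecutive_failures + 1

    def reset(self) -> None:
        """Equivalent of the 'Reset alert' menu action."""
        self.consecutive_failures = 0

health = AlertHealth()
for _ in range(5):
    health.record_check(succeeded=False)
print(health.broken)  # True
health.reset()
print(health.broken)  # False
```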

View history

Open the menu and select View history to see every check, state transition, snooze, and reset for the alert.

Delete an alert

Open the menu and select Delete. Confirm in the dialog. This action can't be undone.
