Postgres destination for batch exports

Batch exports can be used to export data to a Postgres table.

Setting up Postgres access

  1. Make sure PostHog can access your Postgres database.

Note: Wherever your Postgres database is hosted, make sure the host is set to accept all incoming connections so that PostHog can connect to the database and insert events. PostHog does not guarantee a static list of IP addresses to whitelist. If this is not possible in your case, consider exporting data to a different destination (like S3) and then setting up your own system for getting data into your Postgres database.

  2. Create a Postgres user with table creation privileges.

When executing a batch export, the destination table is created if it doesn't exist, which is why the CREATE and USAGE privileges are required. You can and should block PostHog from doing anything else on any other tables. In particular, we recommend creating a new schema and granting PostHog CREATE and USAGE access limited to that schema:

SQL
-- Create a dedicated user for PostHog with a strong password
CREATE USER posthog WITH PASSWORD 'insert-a-strong-password-here';
-- Create a schema that will hold only the exported tables
CREATE SCHEMA posthog_exports;
-- Allow PostHog to create tables in that schema, and nothing else
GRANT CREATE ON SCHEMA posthog_exports TO posthog;
GRANT USAGE ON SCHEMA posthog_exports TO posthog;
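
To confirm the grants took effect, you can run a quick sanity check with PostgreSQL's built-in has_schema_privilege function; both queries should return true:

SQL
-- Verify the posthog user's privileges on the export schema
SELECT has_schema_privilege('posthog', 'posthog_exports', 'CREATE');
SELECT has_schema_privilege('posthog', 'posthog_exports', 'USAGE');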

Event schema

This is the schema of all the fields that are exported to Postgres.

Field       | Type                     | Description
------------|--------------------------|------------------------------------------------------------
uuid        | VARCHAR(200)             | The unique ID of the event within PostHog
event       | VARCHAR(200)             | The name of the event that was sent
properties  | JSONB                    | A JSON object with all the properties sent along with an event
elements    | JSONB                    | This field is present for backwards compatibility but has been deprecated
set         | JSONB                    | A JSON object with any person properties sent with the $set field
set_once    | JSONB                    | A JSON object with any person properties sent with the $set_once field
distinct_id | VARCHAR(200)             | The distinct_id of the user who sent the event
team_id     | INTEGER                  | The team_id for the event
ip          | VARCHAR(200)             | The IP address that was sent with the event
site_url    | VARCHAR(200)             | This field is present for backwards compatibility but has been deprecated
timestamp   | TIMESTAMP WITH TIME ZONE | The timestamp associated with an event
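
For reference, a table matching this schema would look roughly like the following. This is only a sketch: PostHog creates the real table itself at export time, and the schema and table names here (posthog_exports.events) are just the examples used in this guide.

SQL
-- Sketch of the destination table (PostHog creates the real one automatically).
-- "set" and "timestamp" are quoted because they are also SQL keywords.
CREATE TABLE posthog_exports.events (
    uuid VARCHAR(200),
    event VARCHAR(200),
    properties JSONB,
    elements JSONB,
    "set" JSONB,
    set_once JSONB,
    distinct_id VARCHAR(200),
    team_id INTEGER,
    ip VARCHAR(200),
    site_url VARCHAR(200),
    "timestamp" TIMESTAMP WITH TIME ZONE
);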

Creating the batch export

  1. Subscribe to the data pipelines add-on in your billing settings if you haven't already.
  2. In your PostHog instance, click Data pipelines in the navigation and go to the exports tab.
  3. Click "Create export workflow".
  4. Select Postgres as the batch export destination.
  5. Fill in the necessary configuration details.
  6. Finalize the creation by clicking on "Create".
  7. Done! The batch export will schedule its first run at the start of the next period.
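
Once that first run has completed, a quick query against the destination table confirms data is landing (substitute the schema and table names you configured; these are the examples from this guide):

SQL
-- Sanity-check the export: row count and time range of exported events
SELECT count(*) AS rows_exported,
       min("timestamp") AS earliest,
       max("timestamp") AS latest
FROM posthog_exports.events;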

Postgres configuration

Configuring a batch export targeting Postgres requires the following Postgres-specific configuration values:

  • User: A Postgres user with CREATE privileges, used by PostHog to log in to your database.
  • Password: The password for the user provided above.
  • Host: The host name of the server on which your Postgres database is running.
  • Port: The TCP port on which the Postgres database server is listening for connections.
  • Table name: The name of the Postgres table the data will be exported to.
  • Database: The name of the Postgres database that contains the destination table.
  • Schema: The name of the Postgres schema that contains the destination table.
  • Does your Postgres instance have a self-signed SSL certificate?: In most cases, Heroku and RDS users should check this box.
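
Once exported, the data can be queried like any other Postgres table, and the properties column works with Postgres's standard JSONB operators. For example, assuming the example schema and table names used above:

SQL
-- Count pageviews per browser using the ->> JSONB text-extraction operator
SELECT properties->>'$browser' AS browser, count(*) AS pageviews
FROM posthog_exports.events
WHERE event = '$pageview'
GROUP BY browser
ORDER BY pageviews DESC;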
