Deploying to Google Cloud Platform

First, we need to set up a Kubernetes cluster (see the official GCP documentation for more info).

Cluster requirements

  • Kubernetes version >=1.19, <=1.22

  • Ensure your cluster has enough resources to run PostHog (we suggest a minimum of 4 vCPUs and 8 GB of memory in total)

  • Suggestion: ensure allowVolumeExpansion is set to true in the storage class definition (this setting enables PVC resizing)

    PersistentVolumes can be configured to be expandable. When this feature is set to true, users can resize a volume by editing the corresponding PersistentVolumeClaim object.

    This can become useful in case your storage usage grows and you want to resize the disk on-the-fly without having to resync data across PVCs.

    To verify if your storage class allows volume expansion you can run:

    kubectl get storageclass -o json | jq '.items[].allowVolumeExpansion'

    In case it returns false, you can enable volume expansion capabilities for your storage class by running:

    kubectl patch storageclass <your-storage-class-name> -p '{"allowVolumeExpansion": true}'


    Note that:
    • expanding a persistent volume is a time-consuming operation
    • some platforms have a per-volume quota of one modification every 6 hours
    • not all volume types support this feature (please take a look at the official docs for more info)
  • Suggestion: ensure reclaimPolicy is set to Retain in the storage class definition (this setting allows for manual reclamation of the resource)

    The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume (see the official documentation).

    This can become useful in case you need to reprovision a pod/statefulset without losing the underlying data.

    To verify which reclaimPolicy your default storage class is using you can run:

    kubectl get storageclass -o json | jq '.items[].reclaimPolicy'

    If your storage class allows it, you can modify the reclaimPolicy by running:

    kubectl patch storageclass <your-storage-class-name> -p '{"reclaimPolicy": "Retain"}'
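Tying the first suggestion together: once allowVolumeExpansion is true, resizing a volume is just a patch to the claim's requested storage. A minimal sketch (the PVC name is a placeholder, substitute one from `kubectl -n posthog get pvc`):

```shell
# Request a larger size on the claim; the storage driver expands the
# underlying disk asynchronously. The PVC name below is a placeholder.
kubectl -n posthog patch pvc <your-pvc-name> \
  -p '{"spec": {"resources": {"requests": {"storage": "20Gi"}}}}'

# Watch the resize progress in the claim's events and capacity field.
kubectl -n posthog describe pvc <your-pvc-name>
```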

Note: in order to reduce the overhead of managing stateful services like PostgreSQL, Kafka, Redis and ClickHouse yourself, we suggest running them outside Kubernetes and offloading their provisioning, building and maintenance operations to managed services.
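As an illustration, pointing the chart at an externally managed PostgreSQL might look roughly like the fragment below. The exact keys depend on your chart version, so treat these names as an assumption and verify them against the chart's configuration reference:

```yaml
# Hypothetical sketch: disable the bundled PostgreSQL and point the
# chart at a managed instance (e.g. Cloud SQL). Verify the key names
# against your chart version's documentation before using.
postgresql:
  enabled: false

externalPostgresql:
  postgresqlHost: <your-database-host>
  postgresqlPort: 5432
  postgresqlDatabase: posthog
  postgresqlUsername: posthog
  postgresqlPassword: <your-password>
```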

Chart configuration

Here's the minimal required values.yaml that we'll be using later. You can find an overview of the parameters that can be configured during installation under configuration.

cloud: "gcp"
hostname: <your-hostname>

Installing the chart

To install the chart using Helm with the release name posthog in the posthog namespace, run the following:

helm repo add posthog https://posthog.github.io/charts-clickhouse/
helm repo update
helm upgrade --install -f values.yaml --timeout 20m --create-namespace --namespace posthog posthog posthog/posthog --wait --wait-for-jobs --debug

If you don't see some pods or services come up (e.g. chi-posthog-posthog-0-0-0 pod or clickhouse-posthog service is missing), try running helm upgrade again.
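To see what has come up so far, you can list the workloads in the namespace:

```shell
# List the pods and services in the posthog namespace to verify
# that everything has been scheduled and is running.
kubectl -n posthog get pods
kubectl -n posthog get services
```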

Set up a static IP

  1. Open the Google Cloud Console
  2. Go to VPC Networks > External IP addresses
  3. Add a new global static IP with the name posthog
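The same can be done from the command line; a sketch using the gcloud CLI (assuming your project is already configured):

```shell
# Reserve a global static IP named "posthog".
gcloud compute addresses create posthog --global

# Print the reserved address so you can point DNS at it.
gcloud compute addresses describe posthog --global --format='value(address)'
```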

Setting up DNS

Create a DNS record pointing your desired hostname to the static IP address reserved above.
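To confirm the record has propagated, you can query it directly (replace the hostname with your own):

```shell
# Should print the static IP you reserved; an empty result means the
# record has not propagated yet.
dig +short <your-hostname>
```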

After around 30 minutes (required to request, receive and deploy the TLS certificate) you should have a fully working and secure PostHog instance available at the domain record you've chosen!


I cannot connect to my PostHog instance after creation

If DNS has been updated properly, check whether the SSL certificate was created successfully.

This can be done via the following command:

gcloud beta --project yourproject compute ssl-certificates list

If running the command shows the SSL cert as PROVISIONING, that means that the certificate is still being created. Read more on how to troubleshoot Google SSL certificates here.
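For more detail on a specific certificate, including per-domain provisioning status, you can describe it by name (the certificate name below is a placeholder taken from the list output):

```shell
# Show the full status of one certificate, including why a domain is
# still PROVISIONING or has FAILED.
gcloud beta --project yourproject compute ssl-certificates describe <certificate-name>
```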

As a troubleshooting tool, you can allow HTTP access by setting ingress.gcp.forceHttps and web.secureCookies both to false, but we recommend always accessing PostHog via https.
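In values.yaml that would look roughly like the fragment below (key paths assumed from the option names above; double-check them against the chart's configuration reference):

```yaml
# Troubleshooting only: allow plain HTTP access. Revert once TLS works.
ingress:
  gcp:
    forceHttps: false
web:
  secureCookies: false
```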

Upgrading the chart

To upgrade the chart using Helm with the release name posthog in posthog namespace, do the following:

  1. Get and update the helm repo:
helm repo add posthog https://posthog.github.io/charts-clickhouse/
helm repo update
  2. Check if it's going to be a major version upgrade:
helm list -n posthog
helm search repo posthog

Compare the numbers for the chart version (in the format posthog-{major}.{minor}.{patch} - for example, posthog-3.15.1) when running the commands above. If the upgrade is for a major version, check the upgrade notes before moving forward.
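Extracting the major version from a chart name is a one-liner; a small sketch (the example version strings are illustrative):

```shell
# Pull the major version out of chart names like "posthog-3.15.1".
installed="posthog-3.15.1"
latest="posthog-4.0.2"

installed_major=$(echo "$installed" | cut -d- -f2 | cut -d. -f1)
latest_major=$(echo "$latest" | cut -d- -f2 | cut -d. -f1)

# If the major versions differ, read the upgrade notes before proceeding.
if [ "$installed_major" != "$latest_major" ]; then
  echo "major upgrade: check the upgrade notes first"
fi
```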

  3. Run the upgrade:
helm upgrade -f values.yaml --timeout 20m --namespace posthog posthog posthog/posthog --atomic --wait --wait-for-jobs --debug

Check the Helm documentation for more info about the helm upgrade command.

Troubleshooting

If you see this error:
Error: UPGRADE FAILED: release posthog failed, and has been rolled back due to atomic being set: post-upgrade hooks failed: warning: Hook post-upgrade posthog/templates/migrate.job.yaml failed: jobs.batch "posthog-migrate" already exists

it happens because the migrate job was left around from a previous upgrade attempt. Delete that job (kubectl -n posthog delete job posthog-migrate) before running the upgrade again.

Uninstalling the chart

To uninstall the chart with the release name posthog in posthog namespace, run the following:

helm uninstall posthog --namespace posthog

See the Helm docs for documentation on the helm uninstall command.

The command above removes all the Kubernetes components associated with the chart and deletes the release. Sometimes not everything gets removed properly. If that happens, try deleting the namespace:

kubectl delete namespace posthog

ClickHouse configuration

By default, ClickHouse is provisioned with a standard GCP persistent disk. If you want to specify your own persistent volume claim or switch to a different type of disk, you can specify the volume claim within values.yaml.

To manually provision a disk

Using the gcloud cli tool for provisioning a disk:

gcloud compute disks create pvc-clickhouse --type=pd-ssd --size=2048GB --zone=us-central1-c
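You can confirm the disk was created (and check its type and size) with:

```shell
# Describe the newly created disk; adjust the zone to match the one
# used above.
gcloud compute disks describe pvc-clickhouse --zone=us-central1-c
```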

Create the claim

In order to provide the disk to the ClickHouse deployment, you must first create a persistent volume and a claim within the posthog namespace.

# This creates a volume and a claim using the same name specified
# within the ClickHouse values file
apiVersion: v1
kind: PersistentVolume
metadata:
  name: clickhouse-volume
spec:
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  capacity:
    storage: 2048Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: pvc-clickhouse
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clickhouse-pvc
  namespace: posthog
spec:
  # It's necessary to specify "" as the storageClassName
  # so that the default storage class won't be used
  storageClassName: ""
  volumeName: clickhouse-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2048Gi
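With the manifest saved to a file (the filename clickhouse-pv.yaml is chosen here for illustration), apply it and confirm the claim binds:

```shell
# Create the PersistentVolume and PersistentVolumeClaim defined above.
kubectl apply -f clickhouse-pv.yaml

# The claim should report STATUS "Bound" once attached to the volume.
kubectl -n posthog get pvc clickhouse-pvc
```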

Provide the claim to the helm chart

Add the following to your values.yaml and run helm install or helm upgrade:

clickhouse:
  # -- Optional: Used to manually specify a persistent volume claim. When
  #    specified, the cloud-specific storage class will not be provisioned.
  persistentVolumeClaim: "clickhouse-pvc"
