Resize disk

🌇 Sunset Kubernetes deployments

This page covers our PostHog Kubernetes deployment, which we sunset and no longer support. We will continue to provide security updates for Kubernetes deployments until at least May 31, 2024.

For existing customers
We highly recommend migrating to PostHog Cloud (US or EU). Take a look at this guide for more information on the migration process.
Looking to continue self-hosting?
We still maintain our Open-source Docker Compose deployment. Instructions for deploying can be found here.


You need to run a Kubernetes cluster with the Volume Expansion feature enabled. This feature has been supported for the majority of volume types since Kubernetes 1.11 (see docs).


PersistentVolumes can be configured to be expandable. When this feature is set to true, users can resize a volume by editing the corresponding PersistentVolumeClaim object.

This is useful when your storage usage grows and you want to resize the disk on the fly, without having to resync data across PVCs.
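For reference, a StorageClass with expansion enabled looks like the following. The name and provisioner below are illustrative (DigitalOcean block storage is used as an example); yours will depend on your cluster and platform:

```yaml
# Hypothetical example: a default StorageClass with volume expansion enabled.
# metadata.name and provisioner will differ on your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: dobs.csi.digitalocean.com
allowVolumeExpansion: true
```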

To verify if your storage class allows volume expansion you can run:

kubectl get storageclass -o json | jq '.items[].allowVolumeExpansion'

In case it returns false, you can enable volume expansion capabilities for your storage class by running:

DEFAULT_STORAGE_CLASS=$(kubectl get storageclass -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
kubectl patch storageclass "$DEFAULT_STORAGE_CLASS" -p '{"allowVolumeExpansion": true}'
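After patching, you can confirm the change took effect (this assumes `$DEFAULT_STORAGE_CLASS` is still set from the previous command):

```shell
# Should print "true" once the patch has been applied
kubectl get storageclass "$DEFAULT_STORAGE_CLASS" -o jsonpath='{.allowVolumeExpansion}'
```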


  • Expanding a persistent volume is a time-consuming operation
  • Some platforms have a per-volume quota of one modification every 6 hours
  • Not all volume types support this feature. Please take a look at the official docs for more info


  1. Connect to the PostgreSQL container to verify the data directory filesystem size (10GB in this example)

    kubectl -n posthog exec -it posthog-posthog-postgresql-0 -- /bin/bash
    I have no name!@posthog-posthog-postgresql-0:/$ df -h /bitnami/postgresql
    Filesystem Size Used Avail Use% Mounted on
    /dev/disk/by-id/scsi-0DO_Volume_pvc-966716a8-cac6-407a-afb4-8cab52b0ad9b 9.8G 145M 9.2G 2% /bitnami/postgresql
  2. Resize the underlying PVC (in this example we are resizing it to 20Gi)

    kubectl -n posthog patch pvc data-posthog-posthog-postgresql-0 -p '{ "spec": { "resources": { "requests": { "storage": "20Gi" }}}}'
    persistentvolumeclaim/data-posthog-posthog-postgresql-0 patched

    Note: while resizing the PVC you might get the error disk resize is only supported on Unattached disk, current disk state: Attached (see below for more details).

    In this specific case, you need to temporarily scale the StatefulSet replicas down to zero. This will briefly disrupt PostgreSQL availability and make the PostHog UI inaccessible. On newer versions of PostHog, events will be queued and ingestion won't be impacted.

    You can do that by running: kubectl -n posthog patch statefulset posthog-posthog-postgresql -p '{ "spec": { "replicas": 0 }}'

    After you have successfully resized the PVC, restore the initial replica definition with: kubectl -n posthog patch statefulset posthog-posthog-postgresql -p '{ "spec": { "replicas": 1 }}'

  3. Delete the StatefulSet definition but leave its pods online (this avoids any impact on PostHog usage): kubectl -n posthog delete sts --cascade=orphan posthog-posthog-postgresql

  4. In your Helm chart configuration, update the postgresql.persistence value in values.yaml to the target size (20Gi in this example)

  5. Run a helm upgrade to recycle all the pods and re-deploy the StatefulSet definition

  6. Connect to the PostgreSQL container to verify the new filesystem size

    kubectl -n posthog exec -it posthog-posthog-postgresql-0 -- /bin/bash
    I have no name!@posthog-posthog-postgresql-0:/$ df -h /bitnami/postgresql
    Filesystem Size Used Avail Use% Mounted on
    /dev/disk/by-id/scsi-0DO_Volume_pvc-966716a8-cac6-407a-afb4-8cab52b0ad9b 20G 153M 19G 1% /bitnami/postgresql
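As a final check, you can also confirm that Kubernetes itself reports the new capacity on the PVC (same namespace and PVC name as in the steps above):

```shell
# Should report the resized capacity, e.g. 20Gi
kubectl -n posthog get pvc data-posthog-posthog-postgresql-0 -o jsonpath='{.status.capacity.storage}'
```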
