Resize disk


🌇 Sunset Kubernetes deployments

This page covers our PostHog Kubernetes deployment, which we sunset and no longer support. We will continue to provide security updates for Kubernetes deployments until at least May 31, 2024.

For existing customers
We highly recommend migrating to PostHog Cloud (US or EU). Take a look at this guide for more information on the migration process.
Looking to continue self-hosting?
We still maintain our Open-source Docker Compose deployment. Instructions for deploying can be found here.


You need to run a Kubernetes cluster with the Volume Expansion feature enabled. This feature has been supported for the majority of volume types since Kubernetes 1.11 (see docs).


PersistentVolumes can be configured to be expandable. When this feature is enabled on the storage class, users can resize a volume by editing the corresponding PersistentVolumeClaim object.

This is useful when your storage usage grows and you want to resize the disk on the fly, without having to resync data across PVCs.
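For example, before expanding you can check a claim's currently requested size. The jq filter below is applied to a sample snippet mirroring what kubectl get pvc <name> -o json returns (the claim name matches the Kafka example later on this page):

```shell
# Sample of the relevant part of a PVC object, as returned by
# `kubectl get pvc <name> -o json` (inlined here for illustration).
pvc_json='{"metadata":{"name":"data-posthog-posthog-kafka-0"},"spec":{"resources":{"requests":{"storage":"15Gi"}}}}'

# Extract the requested storage size with jq
echo "$pvc_json" | jq -r '.spec.resources.requests.storage'
# -> 15Gi
```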

To verify if your storage class allows volume expansion you can run:

kubectl get storageclass -o json | jq '.items[].allowVolumeExpansion'

If it returns false, you can enable volume expansion for your default storage class by running:

DEFAULT_STORAGE_CLASS=$(kubectl get storageclass -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
kubectl patch storageclass "$DEFAULT_STORAGE_CLASS" -p '{"allowVolumeExpansion": true}'


Keep in mind:

  • expanding a persistent volume is a time-consuming operation
  • some platforms have a per-volume quota of one modification every 6 hours
  • not all volume types support this feature. Please take a look at the official docs for more info
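Because expansion can take a while, it can help to poll the claim until the new capacity is reported. A minimal sketch, assuming kubectl is configured against your cluster (the helper name is hypothetical; the claim name matches the Kafka example below):

```shell
# Hypothetical helper: poll a PVC until its reported capacity matches
# the requested target size. Assumes kubectl points at your cluster.
wait_for_resize() {
  pvc="$1"; target="$2"; ns="${3:-posthog}"
  until [ "$(kubectl -n "$ns" get pvc "$pvc" -o jsonpath='{.status.capacity.storage}')" = "$target" ]; do
    sleep 30  # re-check every 30s; expansion can take several minutes
  done
  echo "$pvc resized to $target"
}

# Usage: wait_for_resize data-posthog-posthog-kafka-0 20Gi
```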


  1. List your pods

    kubectl get pods -n posthog
    posthog-posthog-kafka-0 1/1 Running 0 5m15s
  2. Connect to the Kafka container to verify the data directory filesystem size (in this example, 15G)

    kubectl -n posthog exec -it posthog-posthog-kafka-0 -- /bin/bash
    posthog-posthog-kafka-0:/$ df -h /bitnami/kafka
    Filesystem Size Used Avail Use% Mounted on
    /dev/disk/by-id/scsi-0DO_Volume_pvc-97776a5e-9cdc-4fac-8dad-199f1728b857 15G 40M 14G 1% /bitnami/kafka
  3. Resize the underlying PVC (in this example we are resizing it to 20Gi)

    kubectl -n posthog patch pvc data-posthog-posthog-kafka-0 -p '{ "spec": { "resources": { "requests": { "storage": "20Gi" }}}}'
    persistentvolumeclaim/data-posthog-posthog-kafka-0 patched

    Note: while resizing the PVC you might get an error like "disk resize is only supported on Unattached disk, current disk state: Attached" (see below for more details).

    In this specific case you need to temporarily scale the StatefulSet replicas down to zero. This will briefly disrupt Kafka availability, and since event ingestion stops working, all events sent after this point will be dropped.

    You can do that by running: kubectl -n posthog patch statefulset posthog-posthog-kafka -p '{ "spec": { "replicas": 0 }}'

    After you have successfully resized the PVC, you can restore the initial replica definition with: kubectl -n posthog patch statefulset posthog-posthog-kafka -p '{ "spec": { "replicas": 1 }}'

  4. Delete the StatefulSet definition but leave its pods online (to avoid impacting the availability of the ingestion pipeline): kubectl -n posthog delete sts --cascade=orphan posthog-posthog-kafka

  5. In your Helm chart configuration, update the kafka.persistence value in values.yaml to the target size (20Gi in this example). You might want to update the retention policy too; more info here

  6. Run a helm upgrade to recycle all the pods and re-deploy the StatefulSet definition

  7. Connect to the Kafka container to verify the new filesystem size

    kubectl -n posthog exec -it posthog-posthog-kafka-0 -- /bin/bash
    posthog-posthog-kafka-0:/$ df -h /bitnami/kafka
    Filesystem Size Used Avail Use% Mounted on
    /dev/disk/by-id/scsi-0DO_Volume_pvc-97776a5e-9cdc-4fac-8dad-199f1728b857 20G 40M 19G 1% /bitnami/kafka
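Putting it together, the steps above can be sketched as a single function. This is a hedged illustration, not a definitive script: the Helm release name ("posthog"), the chart reference, and the values.yaml path are assumptions and should be adjusted to your setup, and it skips the optional replica scale-down from step 3.

```shell
# Sketch of the full Kafka disk-resize flow from the steps above.
# Assumes kubectl and helm are configured for your cluster; the Helm
# release name, chart, and values file path are illustrative assumptions.
resize_kafka_disk() {
  ns=posthog
  pvc=data-posthog-posthog-kafka-0
  sts=posthog-posthog-kafka
  new_size=${1:-20Gi}

  # Step 3: request the new size on the claim
  kubectl -n "$ns" patch pvc "$pvc" \
    -p "{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"$new_size\"}}}}"

  # Step 4: delete the StatefulSet but leave its pods running
  kubectl -n "$ns" delete sts --cascade=orphan "$sts"

  # Steps 5-6: after updating kafka.persistence in values.yaml,
  # recycle the pods and re-deploy the StatefulSet definition
  helm upgrade posthog posthog/posthog -n "$ns" -f values.yaml
}
```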

