Deploying to DigitalOcean

DigitalOcean is one of the most well-established cloud providers. Compared to AWS, GCP, and Azure, where the number of options and the amount of configuration can be overwhelming, DigitalOcean is generally simpler to use and faster to get running.

The first thing you'll need is a DigitalOcean account. You can click on the badge below to get US$100 in credit over 60 days (i.e. run PostHog for free for ~2 months).

DigitalOcean Referral Badge

Then you can either follow the 1-click install or the manual install.

1-click install

There is a 1-click install option to deploy PostHog via the DigitalOcean Marketplace. The DigitalOcean UI will ask you if you want to install PostHog on an already provisioned Kubernetes cluster or if you want to create a new one.

Otherwise, if you already have doctl configured, you can simply run the following (this will also configure your kubectl access):

doctl kubernetes cluster create \
  posthog-cluster \
  --count=2 \
  --size="s-2vcpu-4gb" \
  --1-clicks=posthog

Once the setup is complete, you can use either of the following two methods to fetch the URL of your new PostHog installation:

  1. In the DigitalOcean web console (Networking tab), look up the IP address of the load balancer created by Kubernetes:

    DigitalOcean External IP location

  2. If you have configured kubectl access to your cluster, run the following commands:

    POSTHOG_IP=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].ip}" 2> /dev/null)
    POSTHOG_HOSTNAME=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].hostname}" 2> /dev/null)
    if [ -n "$POSTHOG_IP" ]; then
      POSTHOG_INSTALLATION=$POSTHOG_IP
    fi
    if [ -n "$POSTHOG_HOSTNAME" ]; then
      POSTHOG_INSTALLATION=$POSTHOG_HOSTNAME
    fi
    if [ -n "$POSTHOG_INSTALLATION" ]; then
      echo -e "\n----\nYour PostHog installation is available at: http://${POSTHOG_INSTALLATION}\n----\n"
    else
      echo -e "\n----\nUnable to find the address of your PostHog installation\n----\n"
    fi

Note: before using PostHog in production, remember to secure your 1-click installation.

Note: without securing your instance, you will only be able to test web apps served over HTTP, such as those running on localhost.

Securing your 1-click install

Unfortunately, it's not yet possible to pass installation parameters to the DigitalOcean Marketplace, so a post-install step is needed to enable TLS. For this you will need kubectl and Helm installed, with kubectl access to your cluster configured.

1. Update PostHog

Create a values.yaml file in the current directory with the following content:

cloud: "do"
ingress:
  hostname: <your-hostname>
  nginx:
    enabled: true
cert-manager:
  enabled: true

Note: if you are planning to use our GeoIP integration, please also add the snippet below to enable proxy protocol support in the load balancer and in the nginx ingress controller:

#
# For DigitalOcean LB (TCP) mode, we need to enable some additional config
# in the ingress controller in order to get the proper IP address forwarded
# to our app. Otherwise we'll get the load balancer nodes addresses instead.
#
# ref:
# - https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#source-ip-address
# - https://docs.digitalocean.com/products/networking/load-balancers/
#
#
# Additionally we'll also enable pod communication through the Load Balancer
# to ensure Let's Encrypt can reach the cert-manager Pod validating our domain.
#
# ref:
# - https://github.com/kubernetes/ingress-nginx/issues/3996
# - https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes#step-5-%E2%80%94-enabling-pod-communication-through-the-load-balancer-optional
#
ingress-nginx:
  controller:
    config:
      use-proxy-protocol: true
    service:
      annotations:
        service.beta.kubernetes.io/do-loadbalancer-hostname: <your-hostname>
        service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"

and then run:

helm repo add posthog https://posthog.github.io/charts-clickhouse/
helm repo update
helm upgrade -f values.yaml --timeout 20m --namespace posthog posthog posthog/posthog --atomic --wait --wait-for-jobs --debug

2. Install ClusterIssuer

Create a new cluster-scoped resource that will take care of signing your TLS certificates using Let's Encrypt.

  1. Create a new file called cluster-issuer.yaml with the following content. Note: please remember to replace your-name@domain.com with a valid email address as you will receive email notifications on certificate renewals:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        email: "your-name@domain.com"
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: posthog-tls
        solvers:
          - http01:
              ingress:
                class: nginx
  2. Deploy this new resource to your cluster by running: kubectl apply -f cluster-issuer.yaml
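If you want to test the issuance flow first, a staging variant of the issuer above can be pointed at Let's Encrypt's public staging environment, which has much higher rate limits. This is a sketch only: the issuer name and secret name below are illustrative, and staging certificates are not trusted by browsers, so switch back to the production server once validation works.

```yaml
# Staging variant of the issuer above (illustrative names). Certificates
# issued by the staging environment are NOT browser-trusted.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: "your-name@domain.com"
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: posthog-tls-staging
    solvers:
      - http01:
          ingress:
            class: nginx
```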

3. Look up the address of the installation

POSTHOG_IP=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].ip}" 2> /dev/null)
POSTHOG_HOSTNAME=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].hostname}" 2> /dev/null)
if [ -n "$POSTHOG_IP" ]; then
  POSTHOG_INSTALLATION=$POSTHOG_IP
fi
if [ -n "$POSTHOG_HOSTNAME" ]; then
  POSTHOG_INSTALLATION=$POSTHOG_HOSTNAME
fi
if [ -n "$POSTHOG_INSTALLATION" ]; then
  echo -e "\n----\nYour PostHog installation is available at: http://${POSTHOG_INSTALLATION}\n----\n"
else
  echo -e "\n----\nUnable to find the address of your PostHog installation\n----\n"
fi

4. Set up DNS

Create the record of your desired hostname pointing to the address found above.
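As an illustration, this is a plain A record pointing at the load balancer IP (or a CNAME, if the lookup above returned a hostname instead). A BIND-style sketch with placeholder values - posthog.example.com and 203.0.113.10 are hypothetical and should be replaced with your own domain and address:

```
; Hypothetical zone entries - replace the name and address with your own.
posthog.example.com.   300   IN   A       203.0.113.10
; or, if the lookup above returned a hostname rather than an IP:
; posthog.example.com. 300   IN   CNAME   <your-load-balancer-hostname>.
```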

After around 30 minutes (required to request, receive, and deploy the TLS certificate), you should have a fully working and secure PostHog instance available at the domain record you've chosen!

Manual install

First, we need to set up a Kubernetes cluster (see the official DigitalOcean documentation for more info).

Cluster requirements

  • Kubernetes version >=1.19 <=1.22

  • Ensure your cluster has enough resources to run PostHog (we suggest a total minimum of 4 vCPUs and 8 GB of memory)

  • Suggestion: ensure allowVolumeExpansion is set to true in the storage class definition (this setting enables PVC resizing)

    PersistentVolumes can be configured to be expandable. When this feature is set to true, users can resize a volume by editing the corresponding PersistentVolumeClaim object.

    This can become useful in case your storage usage grows and you want to resize the disk on-the-fly without having to resync data across PVCs.

    To verify if your storage class allows volume expansion you can run:

    kubectl get storageclass -o json | jq '.items[].allowVolumeExpansion'
    true

    In case it returns false, you can enable volume expansion capabilities for your storage class by running:

    kubectl patch storageclass <your-storage-class> -p '{"allowVolumeExpansion": true}'
    storageclass.storage.k8s.io/gp2 patched

    N.B.:

    • expanding a persistent volume is a time-consuming operation
    • some platforms have a per-volume quota of one modification every 6 hours
    • not all volume types support this feature. Please take a look at the official docs for more info
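If you want to see what the patch above changes before touching a live cluster, its effect can be simulated locally with jq on a sample StorageClass object. The object below is a trimmed, illustrative example, not output from a real cluster:

```shell
# Illustrative only: apply the same field change locally with jq on a
# trimmed sample StorageClass object.
sample='{"kind":"StorageClass","metadata":{"name":"do-block-storage"},"allowVolumeExpansion":false}'

# Flip the field, as `kubectl patch` would do server-side.
patched=$(echo "$sample" | jq '.allowVolumeExpansion = true')

echo "$patched" | jq -r '.allowVolumeExpansion'   # prints "true"
```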
  • Suggestion: ensure reclaimPolicy is set to Retain in the storage class definition (this setting allows for manual reclamation of the resource)

    The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume (see the official documentation).

    This can be useful if you need to reprovision a pod/StatefulSet but don't want to lose the underlying data.

    To verify which reclaimPolicy your default storage class is using you can run:

    kubectl get storageclass -o json | jq '.items[].reclaimPolicy'
    "Retain"

    If your storage class allows it, you can modify the reclaimPolicy by running:

    kubectl patch storageclass <your-storage-class> -p '{"reclaimPolicy": "Retain"}'
    storageclass.storage.k8s.io/gp2 patched

Note: in order to reduce the overhead of managing stateful services like PostgreSQL, Kafka, Redis, and ClickHouse yourself, we suggest running them outside Kubernetes and offloading their provisioning, building, and maintenance operations.
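For example, the chart exposes toggles to disable the bundled services and point PostHog at externally managed ones. The key names below are indicative only and may differ between chart versions - verify them against the chart's configuration reference before applying:

```yaml
# Indicative sketch only: key names vary between chart versions, so check
# them against the chart's configuration reference before using.
postgresql:
  enabled: false          # don't deploy the bundled PostgreSQL
externalPostgresql:
  postgresqlHost: <your-managed-postgres-host>
  postgresqlPort: 5432

redis:
  enabled: false          # don't deploy the bundled Redis
externalRedis:
  host: <your-managed-redis-host>
  port: 6379
```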

1. Chart configuration

Here's the minimal required values.yaml that we'll be using later. You can find an overview of the parameters that can be configured during installation under configuration.

cloud: "do"
ingress:
  hostname: <your-hostname>
  nginx:
    enabled: true
cert-manager:
  enabled: true

Note: if you are planning to use our GeoIP integration, please also add the snippet below to enable proxy protocol support in the load balancer and in the nginx ingress controller:

#
# For DigitalOcean LB (TCP) mode, we need to enable some additional config
# in the ingress controller in order to get the proper IP address forwarded
# to our app. Otherwise we'll get the load balancer nodes addresses instead.
#
# ref:
# - https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#source-ip-address
# - https://docs.digitalocean.com/products/networking/load-balancers/
#
#
# Additionally we'll also enable pod communication through the Load Balancer
# to ensure Let's Encrypt can reach the cert-manager Pod validating our domain.
#
# ref:
# - https://github.com/kubernetes/ingress-nginx/issues/3996
# - https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes#step-5-%E2%80%94-enabling-pod-communication-through-the-load-balancer-optional
#
ingress-nginx:
  controller:
    config:
      use-proxy-protocol: true
    service:
      annotations:
        service.beta.kubernetes.io/do-loadbalancer-hostname: <your-hostname>
        service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"

2. Install the chart

To install the chart using Helm with the release name posthog in the posthog namespace, run the following:

helm repo add posthog https://posthog.github.io/charts-clickhouse/
helm repo update
helm upgrade --install -f values.yaml --timeout 20m --create-namespace --namespace posthog posthog posthog/posthog --wait --wait-for-jobs --debug

Troubleshooting

If you don't see some pods or services come up (e.g. chi-posthog-posthog-0-0-0 pod or clickhouse-posthog service is missing), try running helm upgrade again.

3. Look up the address of the installation

Note: due to a limitation in the DigitalOcean ingress controller, the second method will not work if you have enabled the settings to support the GeoIP integration. Please use the web UI to get your IP address.

4. Set up DNS

Create the record of your desired hostname pointing to the address found above.

After around 30 minutes (required to request, receive, and deploy the TLS certificate), you should have a fully working and secure PostHog instance available at the domain record you've chosen!

Troubleshooting

I cannot connect to my PostHog instance after creation

As a troubleshooting tool, you can allow HTTP access by setting these values in your values.yaml, but we recommend always accessing PostHog via HTTPS.

ingress:
  nginx:
    enabled: true
  redirectToTLS: false
  letsencrypt: false
web:
  secureCookies: false

After upgrading you can run the following command to get the URL to access PostHog:

POSTHOG_IP=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].ip}" 2> /dev/null)
POSTHOG_HOSTNAME=$(kubectl get --namespace posthog ingress posthog -o jsonpath="{.status.loadBalancer.ingress[0].hostname}" 2> /dev/null)
if [ -n "$POSTHOG_IP" ]; then
  POSTHOG_INSTALLATION=$POSTHOG_IP
fi
if [ -n "$POSTHOG_HOSTNAME" ]; then
  POSTHOG_INSTALLATION=$POSTHOG_HOSTNAME
fi
if [ -n "$POSTHOG_INSTALLATION" ]; then
  echo -e "\n----\nYour PostHog installation is available at: http://${POSTHOG_INSTALLATION}\n----\n"
else
  echo -e "\n----\nUnable to find the address of your PostHog installation\n----\n"
fi

Upgrading the chart

To upgrade the chart using Helm with the release name posthog in the posthog namespace, run the following:

  1. Get and update the Helm repo:
helm repo add posthog https://posthog.github.io/charts-clickhouse/
helm repo update
  2. Check if it's going to be a major version upgrade:
helm list -n posthog
helm search repo posthog

Compare the chart version numbers (in the format posthog-{major}.{minor}.{patch} - for example, posthog-3.15.1) returned by the commands above. If the upgrade crosses a major version, check the upgrade notes before moving forward.
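The comparison above can also be scripted. A small sketch - the version strings below are illustrative; in practice they would come from helm list and helm search repo:

```shell
# Sketch: detect a major-version jump between the installed and latest
# chart versions (format posthog-{major}.{minor}.{patch}).
# Illustrative values; in practice take these from `helm list -n posthog`
# and `helm search repo posthog`.
installed="posthog-3.15.1"
latest="posthog-4.0.2"

# Extract the major version number from a chart version string.
major() { echo "$1" | sed 's/^posthog-//' | cut -d. -f1; }

if [ "$(major "$installed")" != "$(major "$latest")" ]; then
  echo "Major version upgrade: check the upgrade notes first"
else
  echo "Minor or patch upgrade"
fi
```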

  3. Run the upgrade:
helm upgrade -f values.yaml --timeout 20m --namespace posthog posthog posthog/posthog --atomic --wait --wait-for-jobs --debug

Check the Helm documentation for more info about the helm upgrade command.

Troubleshooting

If you see the following error:

Error: UPGRADE FAILED: release posthog failed, and has been rolled back due to atomic being set: post-upgrade hooks failed: warning: Hook post-upgrade posthog/templates/migrate.job.yaml failed: jobs.batch "posthog-migrate" already exists

it means the migrate job was left over from a previous upgrade attempt. Delete that job (kubectl -n posthog delete job $(kubectl -n posthog get jobs --no-headers -o custom-columns=NAME:.metadata.name)) before running the upgrade again.

Uninstalling the chart

To uninstall the chart with the release name posthog in the posthog namespace, run the following:

helm uninstall posthog --namespace posthog

See the Helm docs for documentation on the helm uninstall command.

The command above removes all the Kubernetes components associated with the chart and deletes the release. Sometimes not everything gets properly removed. If that happens, try deleting the namespace:

kubectl delete namespace posthog
