Developing locally

❗️ This guide is intended only for development of PostHog itself. If you're looking to deploy PostHog for your product analytics needs, go to Self-host PostHog.

What does PostHog look like?

Before jumping into setup, let's dissect a PostHog. The app itself is made up of 4 components that run simultaneously:

  • Django server
  • Celery worker (handles execution of background tasks)
  • Node.js plugin server (handles event ingestion and plugins)
  • React frontend built with Node.js

These components rely on a few external services:

  • PostgreSQL – for storing ordinary data (users, projects, saved insights)
  • Redis – for caching and inter-service communication
  • ClickHouse – for storing big data (events, persons – analytics queries)
  • Kafka – for queuing events for ingestion
  • Zookeeper – for coordinating Kafka and ClickHouse clusters

When spinning up an instance of PostHog for development, we recommend the following configuration:

  • the external services run in Docker, via docker compose
  • PostHog itself runs on the host (your system)

This is what we'll be using in the guide below.

It is also technically possible to run PostHog entirely in Docker, but syncing changes is then much slower, and for development you need PostHog's dependencies installed on the host anyway (such as formatting or typechecking tools). The other way around – everything on the host – isn't practical either, due to the significant complexity of instantiating Kafka or ClickHouse from scratch.

The instructions here assume you're running macOS.

For Linux, adjust the steps as needed (e.g. use apt or pacman in place of brew).

Windows isn't supported natively. However, Windows users can run a Linux virtual machine; the latest Ubuntu Desktop is recommended. (Ubuntu Server is not recommended, as debugging the frontend requires a browser that can access localhost.)

In case some steps here have fallen out of date, please tell us about it – feel free to submit a patch!

macOS prerequisites

  1. Install Xcode Command Line Tools if you haven't already: xcode-select --install.

  2. Install the package manager Homebrew by following the instructions here.

After installation, make sure to follow the instructions printed in your terminal to add Homebrew to your $PATH. Otherwise the command line will not know about packages installed with brew.
  3. Install Docker Desktop and in its settings give Docker at least 4 GB of RAM (or 6 GB if you can afford it) and at least 4 CPU cores.

  4. Append the line 127.0.0.1 kafka clickhouse to /etc/hosts. You can do it in one line with:

    echo '127.0.0.1 kafka clickhouse' | sudo tee -a /etc/hosts

    ClickHouse and Kafka won't be able to talk to each other without these mapped hosts.

  5. Clone the PostHog repository. All future commands assume you're inside the posthog/ folder.

    git clone https://github.com/PostHog/posthog && cd posthog/
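The hosts mapping from step 4 can be sanity-checked with a quick grep – this snippet only reads /etc/hosts, so it's safe to run repeatedly:

```shell
# Report whether kafka and clickhouse are mapped in /etc/hosts
for host in kafka clickhouse; do
  if grep -q "$host" /etc/hosts; then
    echo "$host: mapped"
  else
    echo "$host: NOT mapped - repeat the tee command above"
  fi
done
```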

Get things up and running

1. Spin up external services

In this step we will install Postgres, Redis, ClickHouse, Kafka and Zookeeper via Docker.

First, run the services in Docker:

# ARM (arm64) systems (e.g. Apple Silicon)
docker compose -f docker-compose.arm64.yml pull zookeeper kafka clickhouse db redis
docker compose -f docker-compose.arm64.yml up zookeeper kafka clickhouse db redis
# x86 (amd64) systems
docker compose -f docker-compose.dev.yml pull zookeeper kafka clickhouse db redis
docker compose -f docker-compose.dev.yml up zookeeper kafka clickhouse db redis

Friendly tip 1: If you see Error while fetching server API version: 500 Server Error for http+docker://localhost/version:, it's likely that Docker Engine isn't running.

Friendly tip 2: If you see "Exit Code 137" anywhere, it means that the container has run out of memory. In this case you need to allocate more RAM in Docker Desktop settings.

Friendly tip 3: You might need sudo – see Docker docs on managing Docker as a non-root user.

Second, verify via docker ps and docker logs (or via the Docker Desktop dashboard) that all these services are up and running. They should display something like this in their logs:

# docker ps
b0f72510b818 posthog/clickhouse:v21.9.2.17-stable "/entrypoint.sh" 26 hours ago Up 21 hours 0.0.0.0:8123->8123/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:9009->9009/tcp, 0.0.0.0:9440->9440/tcp ee_clickhouse_1
12d146b93d69 wurstmeister/kafka "start-kafka.sh" 26 hours ago Up 21 hours 0.0.0.0:9092->9092/tcp ee_kafka_1
432afd46fc93 postgres:12-alpine "docker-entrypoint.s…" 26 hours ago Up 21 hours 0.0.0.0:5432->5432/tcp ee_db_1
cdbf065ffa3f zookeeper "/docker-entrypoint.…" 26 hours ago Up 21 hours 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp ee_zookeeper_1
ff77dcf7facd redis:alpine "docker-entrypoint.s…" 26 hours ago Up 21 hours 0.0.0.0:6379->6379/tcp ee_redis_1
# docker logs ee_db_1 -n 1
2021-12-06 13:47:08.325 UTC [1] LOG: database system is ready to accept connections
# docker logs ee_redis_1 -n 1
1:M 06 Dec 2021 13:47:08.435 * Ready to accept connections
# docker logs ee_clickhouse_1 -n 1
Saved preprocessed configuration to '/var/lib/clickhouse/preprocessed_configs/users.xml'.
# docker logs ee_kafka_1
[2021-12-06 13:47:23,814] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
# docker logs ee_zookeeper_1
# Because ClickHouse and Kafka connect to Zookeeper, there will be a lot of noise here. That's good.

Friendly tip: Kafka is currently the only x86 container used, and might segfault randomly when running on ARM. Restart it when that happens.
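If you prefer scripting the verification over reading logs, a quick TCP probe of each service's default port works too. This is just a sketch – the port numbers are the compose defaults, and any TCP client would do in place of Python's socket module:

```shell
# Probe each backing service's default port on localhost and report up/down
for svc in db:5432 redis:6379 clickhouse:8123 kafka:9092 zookeeper:2181; do
  name=${svc%%:*}
  port=${svc##*:}
  if python3 -c "import socket, sys; s = socket.socket(); s.settimeout(1); sys.exit(s.connect_ex(('127.0.0.1', int(sys.argv[1]))))" "$port"; then
    echo "$name: up"
  else
    echo "$name: down"
  fi
done
```

If any service reports down, check that container's logs before moving on.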

Finally, install Postgres locally. Even though we run Postgres inside Docker, we need a local installation of Postgres (version 11+) for its CLI tools and the development libraries/headers that pip requires to install psycopg2.

On macOS you can just run:

brew install postgresql

This installs both the Postgres server and its tools. DO NOT start the server after running this.

On Linux the tools, the server, and the development headers often ship as separate packages – on Debian/Ubuntu, for example: postgresql-client, postgresql, and libpq-dev (the headers psycopg2 builds against). Consult your distro's package list for up-to-date names.
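Whichever platform you're on, it's worth confirming the tools actually landed on your $PATH before pip needs them:

```shell
# pip builds psycopg2 against pg_config, so it must be discoverable
if command -v pg_config >/dev/null 2>&1; then
  echo "pg_config found: $(pg_config --version)"
else
  echo "pg_config not found - check your Postgres installation and PATH"
fi
```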

2. Prepare the frontend

  1. Install nvm. (If you use fish, a fish-compatible alternative such as nvm.fish works too.) After installation, make sure to follow the instructions printed in your terminal to add nvm to your $PATH. Otherwise the command line will use your system Node.js version instead.

  2. Install the latest Node.js 14 (the version used by PostHog in production) with nvm install 14. You can start using it in the current shell with nvm use 14.

  3. Install yarn with npm install -g yarn@1.

  4. Install Node packages by running yarn.

  5. Run yarn typegen:write to generate types for Kea state management logics used all over the frontend.

The first time you run typegen, it may get stuck in a loop. If so, cancel the process (Ctrl+C), discard all changes in the working directory (git reset --hard), and run yarn typegen:write again. You may need to discard all changes once more when the second round of type generation completes.
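Before running yarn, it's worth double-checking that the shell picked up the nvm-managed Node.js rather than a system copy – a small sketch (the expected major version comes from the step above):

```shell
# Warn if the active Node.js major version differs from what PostHog expects
expected=14
if command -v node >/dev/null 2>&1; then
  major=$(node --version | sed 's/^v//' | cut -d. -f1)
  if [ "$major" = "$expected" ]; then
    echo "OK: Node.js major version $major"
  else
    echo "WARNING: Node.js $major active, expected $expected - try: nvm use $expected"
  fi
else
  echo "node not found - did you follow the nvm PATH instructions?"
fi
```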

3. Prepare the plugin server

Assuming Node.js is installed, run yarn --cwd plugin-server to install all required packages. We'll run this service in a later step.

4. Prepare the Django server

  1. Install Python 3.9+. You can do so with Homebrew: brew install python@3.9. Outside of a virtual environment, always use python3 instead of python, as the latter may point to Python 2.x on some systems.

    Friendly tip: Need to manage multiple versions of Python on a single machine? Try pyenv.

  2. Create a virtual environment called env in the current directory:

    python3 -m venv env
  3. Activate the virtual environment:

    # For bash/zsh/etc.
    source env/bin/activate
    # For fish
    source env/bin/activate.fish
  4. Install requirements with pip

    If your workstation is ARM-based (e.g. Apple Silicon), the first time you run pip install you must pass it custom OpenSSL headers:

    brew install openssl
    CFLAGS="-I /opt/homebrew/opt/openssl/include" LDFLAGS="-L /opt/homebrew/opt/openssl/lib" GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 pip install -r requirements.txt

    These will be used when installing grpcio and psycopg2. After doing this once, and assuming nothing changed with these two packages, next time simply run:

    pip install -r requirements.txt

    If on an x86 platform, simply run the latter version.

  5. Install dev requirements

    pip install -r requirements-dev.txt
  6. (optional) SAML support

    If you want to run PostHog with the full feature set, you'll need to install a few dependencies for SAML to work. If you're on macOS, run the command below, otherwise check the official xmlsec repo for more details.

    brew install libxml2 libxmlsec1 pkg-config
    pip install python3-saml==1.12.0
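If virtual environments are new to you, steps 2–3 above can be rehearsed in a throwaway location first – /tmp/posthog-venv-demo below is a hypothetical path, not part of the repo:

```shell
# Create, activate, inspect, and discard a scratch virtual environment
python3 -m venv /tmp/posthog-venv-demo
. /tmp/posthog-venv-demo/bin/activate
python -c 'import sys; print(sys.prefix)'  # prints the venv's own path once activated
deactivate
rm -rf /tmp/posthog-venv-demo
```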

5. Prepare databases

We now have the backend ready, and Postgres and ClickHouse running – these databases are blank slates at the moment however, so we need to run migrations to e.g. create all the tables.

If on ARM, first run export SKIP_SERVICE_VERSION_REQUIREMENTS=1 as ClickHouse 21.9 – which is the first ARM-compatible version of ClickHouse – isn't officially supported by PostHog yet.

DEBUG=1 python manage.py migrate
DEBUG=1 python manage.py migrate_clickhouse

Friendly tip: The error fe_sendauth: no password supplied connecting to Postgres happens when the database is set up with a password and the user:pass isn't specified in DATABASE_URL. Try export DATABASE_URL=postgres://posthog:posthog@localhost:5432/posthog.

Another friendly tip: You may run into psycopg2 errors while migrating on an ARM machine. Try out the steps in this comment to resolve this.
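If you suspect a malformed DATABASE_URL rather than a missing password, you can decompose it without touching the database at all – the URL below is the default from the tip above:

```shell
# Split a Postgres connection URL into its parts for a quick sanity check
export DATABASE_URL='postgres://posthog:posthog@localhost:5432/posthog'
python3 - <<'EOF'
import os
from urllib.parse import urlparse

p = urlparse(os.environ["DATABASE_URL"])
print(f"user={p.username} host={p.hostname} port={p.port} db={p.path.lstrip('/')}")
EOF
```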

6. Start PostHog

Now start all of PostHog (backend, worker, plugin server, and frontend – simultaneously) with:

./bin/start

Open http://localhost:8000 to see the app.

Note: The first time you run this command you might get an error that says "layout.html is not defined". Make sure you wait until the frontend is finished compiling and try again.

To see some data on the frontend, go to http://localhost:8000/demo and play around with it. This will give you a Hogflix test project containing some data in the app.

7. Develop

This is it! You can now change PostHog in any way you want. To commit changes, create a new branch based on master for your intended change, and develop away. Just make sure not to use release-* patterns in your branch names unless you're putting out a new version of PostHog, as such branches have special handling related to releases.


For a PostHog PR to be merged, all tests must be green, and ideally you should be introducing new ones as well – that's why you must be able to run tests with ease.

For the backend, simply use:

pytest
You can narrow the run down to only files under matching paths:

pytest posthog/test/

Or to only test cases with matching function names:

pytest posthog/test/ -k test_inclusion

To see debug logs (such as ClickHouse queries), add argument --log-cli-level=DEBUG.
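To get a feel for how -k filtering behaves before pointing it at the real test suite, here's a self-contained toy run in a scratch directory (the file and test names are made up for illustration):

```shell
# Demonstrate pytest -k: only the matching test function runs
mkdir -p /tmp/pytest-k-demo && cd /tmp/pytest-k-demo
cat > test_demo.py <<'EOF'
def test_inclusion():
    assert "bc" in "abcd"

def test_something_else():
    assert 1 + 1 == 2
EOF
python3 -m pytest -q -k test_inclusion test_demo.py
```

pytest should report one test passed and one deselected.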

For Cypress end-to-end tests, run bin/e2e-test-runner. This will temporarily install the required dependencies inside the project, spin up a test instance of PostHog, and show you the Cypress interface, from which you'll manually choose tests to run.

Once you're done, terminate the command with Ctrl + C. Be sure to wait until the command terminates gracefully, so that temporary dependencies are removed from package.json and you don't commit them accidentally.

For frontend tests, all you need is yarn test.

Extra: Working with feature flags

When developing locally with environment variable DEBUG=1 (which enables a setting called SELF_CAPTURE), all analytics inside your local PostHog instance is based on that instance itself – more specifically, the currently selected project. This means that your activity is immediately reflected in the current project, which is potentially useful for testing features – for example, which feature flags are currently enabled for your development instance is decided by the project you have open at the very same time.

So, when working with a feature based on feature flag foo-bar, add a feature flag with this key to your local instance and release it there.

If you'd like to have ALL feature flags that exist in PostHog at your disposal right away, run python3 manage.py sync_feature_flags – they will be added to each project in the instance, fully rolled out by default.

Extra: Debugging the backend in PyCharm

With PyCharm's built-in support for Django, it's fairly easy to set up debugging in the backend. This is especially useful when you want to trace and debug a network request made from the client all the way back to the server. You can set breakpoints and step through code to see exactly what the backend is doing with your request.

  1. Set up the Django configuration as per JetBrains' docs.
  2. Click Edit Configuration to edit the Django Server configuration you just created.
  3. Point PyCharm to the project root (posthog/) and to the settings file (posthog/posthog/settings.py).
  4. Add the environment variables the server needs – at minimum DEBUG=1, plus DATABASE_URL if your Postgres setup requires credentials (see the "Prepare databases" step above).
