ClickHouse Engineer
Department
Engineering
Location
Remote (US, UK, EMEA)
Timezone(s)
GMT +2:00 to GMT -8:00
About PostHog
We’re equipping every developer to build successful products by giving them a suite of products to analyze, test, observe, and deploy new features. We currently offer product and web analytics, session replay, feature flags, experiments, a CDP, SQL access, a data warehouse, and LLM observability… and there’s plenty more to come.
PostHog was created as an open-source project during Y Combinator's W20 cohort. We had the most successful B2B software launch on HackerNews since 2012 with a product that was just 4 weeks old. Since then, more than 100,000 companies have installed the platform. We've had huge success with our paid upgrades, raised bags of money from some of the world's top investors, and have extremely strong product-led growth – 97% driven by word of mouth.
We're growing quickly, but sustainably. We're also default alive, averaging 10% monthly revenue growth and with more than $20m ARR. We're staying focused on building an awesome product for end users, hiring a handful of exceptional team members, and seeing fantastic growth as a result.
What we value
We are open source - building a huge community around a free-for-life product is key to PostHog's strategy.
We aim to become the most transparent company, ever. In order to enable teams to make great decisions, we share as much information as we can. In our public handbook everyone can read about our roadmap, how we pay (or even let go of) people, what our strategy is, and who we have raised money from.
Working autonomously and maximizing impact - we don’t tell anyone what to do. Everyone chooses what to work on next based on what is going to have the biggest impact on our customers.
Solve big problems - we haven't built our defining feature yet. We are all about shipping fast, talking to users, and iterating.
Who we’re looking for
The ClickHouse team is responsible for the ClickHouse cluster and query layer that all other teams (and all customers) use to store and query data. You’ll help manage and operate the cluster, figure out the best architecture, and make it 🚀
What you’ll be doing
Develop tooling for full and incremental backup and restore processes for ClickHouse clusters.
Create schema and mutation management tools to make it easy for teams at PostHog to manage their own tables.
Enhance visibility into cluster statuses.
Automate dynamic provisioning of instances, utilizing Terraform and Ansible.
Build query benchmarking and performance tooling to help engineers identify and optimize expensive queries.
Make everything—schema, queries, and cluster configurations—faster and more efficient.
Requirements
Proficiency in Python, Kubernetes, and AWS.
Experience building and operating high-scale, complex data storage solutions.
Strong interest and experience in ClickHouse (or similar OLAP databases) internals and query performance optimization.
Can thrive in a culture of autonomy and self-direction.
Nice to have
Experience with Terraform and Ansible for infrastructure automation.
We believe people from diverse backgrounds, with different identities and experiences, make our product and our company better. That’s why we dedicated a page in our handbook to diversity and inclusion. No matter your background, we'd love to hear from you! Alignment with our values is just as important as experience! 🙏
Also, if you have a disability, please let us know if there's any way we can make the interview process better for you - we're happy to accommodate!
Salary
We have a set system for compensation as part of being transparent. Salary varies based on location and level of experience.
Salary calculator (based on market rates for location, level, and step):
- Benchmark (United States - San Francisco, California): $243,000
- Level modifier: 1
- Step modifier: 0.95 - 1.04
Benefits
Generous, transparent compensation & equity
Unlimited vacation (with a minimum!)
Two meeting-free days per week
Home office
Coworking credit
Private health, dental, and vision insurance.
Training budget
Access to our Hedge House
Carbon offsetting
Pension & 401k contributions
We hire and pay locally
Company offsites
Get more details about all our benefits on the Careers page.
Your team's mission and objectives
Data at PostHog - Mission
Data Team's mission is to provide a storage and query engine that meets these requirements:
- Continue to meet the needs of the product now and in the future
- Maintain and optimize our current ClickHouse deployment
- Elastically scale our capacity with little effort
- Support multiple query quality of service (QoS) guarantees (real-time, batch, etc.)
- Data is stored once and queryable from the appropriate tool
- Queries are optimized for cost and performance
- Tunable execution performance to allow trade-offs between cost and performance
- Storage is durable
In service of this mission, our goals are:
Goals for Q2 2025:
Query Observability and Performance Improvements
- We want to know where each query is coming from and why. Is it per-team or PostHog-wide? Was it issued by our own systems or by a customer?
- Add additional metadata tags to better identify and categorize API query sources
Make ClickHouse ops easy
- Complete cluster management through Infrastructure as Code (IaC)
- Optimize the speed of critical operational procedures, including automation for common operations (e.g. ZooKeeper)
- Implement ClickHouseKeeping automation via Dagster (cleanup jobs, backups, improve replication performance)
- Better runbooks for everyone
- Build automation for handling ad-hoc deletion requests and disk provisioning
- Storage infrastructure improvements (s3-backed events, get rid of io2)
- Switch from ZooKeeper to ClickHouse Keeper
Use Altinity Antalya in production for events
- Upgrade clusters to Antalya
- Get data warehouse queries running through Antalya Swarm
- Backfill all events onto Iceberg in S3
Interview process
We do 2-3 short interviews, then pay you to do some real-life (or close to real-life) work.
1. Application (You are here)
Our talent team will review your application. We're looking to see how your skills and experience align with our needs.
2. Culture interview
30-min video call. Our goal is to explore your motivations to join our team, learn why you’d be a great fit, and answer questions about us.
3. Technical interview
45 minutes, varies by role. You'll meet the hiring team, who will evaluate the skills needed to be successful in your role. No live coding.
4. Founder interview
30 minutes. You have reached the final boss. It's time to chat with James or Tim.
5. PostHog SuperDay
Paid day of work. You’ll meet a few more members of the team and work on an independent project. It's challenging, but most people say it's fun!
6. Offer
Pop the champagne (after you sign). If everyone is happy, we’ll make you an offer to join us - YAY!
Apply
(Now for the fun part...)
Just fill out this painless form and we'll get back to you within a few days. Thanks in advance!