B2B product with 250+ business partners in 90 countries.
A team of 130+ bright professionals eager to challenge everything you know.
Does that sound like a dream place to you? Read on, buddy!
We help ambitious entrepreneurs launch their own ride-hailing (taxi) businesses, providing our partners with a SaaS platform and marketing support.
To do that right, we’ve got:
- 14 years of expertise in developing an all-in-one platform
- Amazing team of specialists working and playing hard
- Opportunity for all employees to influence the product and take initiative
- Focus on quality, no strict deadlines
- Flexible management
- Healthy work-life balance
About the role:
We’re rebuilding our analytics platform around a self-hosted ClickHouse cluster and want an engineer who will design, build, and evolve that stack. You’ll collaborate with data analysts, backend engineers and DevOps, drive architectural decisions, and keep our data reliable and fast.
What you’ll do:
- Design and grow the ClickHouse cluster – sharding, replication, partitioning, TTL policies, materialized views (see the sketch after this list for a taste of this work)
- Build data-ingestion pipelines from backend CDC sources, event streams, and external services; ensure they stay scalable and cost-efficient
- Own ELT workflows in dbt and an orchestration tool: from models and tests to automated schema migrations and releases
- Safeguard data quality by implementing automated freshness/completeness checks, detecting schema drift, and leading incident analysis when issues arise
- Tune performance by profiling heavy queries, advising analysts on SQL best practices, and controlling storage growth
- Secure sensitive data – RBAC, column-level masking, audit policies, GDPR-friendly practices
- Automate docs & lineage (dbt docs / OpenMetadata) so everyone always knows where data comes from and how to use it
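To give a purely illustrative taste of the first bullet, here is a minimal ClickHouse SQL sketch of the kind of schema this role would own: a replicated, monthly-partitioned events table with a 12-month TTL, plus a materialized view that maintains a daily rollup. The cluster, database, table, and column names are hypothetical, and the layout assumes a standard ReplicatedMergeTree setup with {shard}/{replica} macros.

```sql
-- Hypothetical example: a replicated fact table, partitioned by month, with a 12-month TTL.
-- Cluster, database, table, and column names are illustrative only.
CREATE TABLE analytics.ride_events ON CLUSTER main
(
    event_time  DateTime,
    ride_id     UInt64,
    partner_id  UInt32,
    city        LowCardinality(String),
    fare        Decimal(12, 2)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/ride_events', '{replica}')
PARTITION BY toYYYYMM(event_time)
ORDER BY (partner_id, event_time)
TTL event_time + INTERVAL 12 MONTH;

-- A materialized view that keeps a daily per-partner rollup up to date as events arrive.
CREATE MATERIALIZED VIEW analytics.ride_events_daily_mv ON CLUSTER main
ENGINE = ReplicatedSummingMergeTree('/clickhouse/tables/{shard}/ride_events_daily', '{replica}')
PARTITION BY toYYYYMM(day)
ORDER BY (partner_id, day)
AS
SELECT
    toDate(event_time) AS day,
    partner_id,
    count()            AS rides,
    sum(fare)          AS revenue
FROM analytics.ride_events
GROUP BY day, partner_id;
```

In practice, the partition key, ORDER BY, and TTL would be chosen from real query patterns and retention requirements – that is exactly the kind of decision you would drive here.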
You’re a great fit if you have:
- 3+ years of production ClickHouse experience – deployment, tuning, sharding, replication
- Hands-on experience with Kafka and CDC tools (e.g. Debezium)
- dbt expertise & orchestration-tool skills: writing DAGs, tests, docs, and zero-downtime migrations
- Advanced SQL (ClickHouse), solid Python (or another scripting language) for orchestration & data-quality checks, and strong Java for building data-ingestion pipelines
- Proven track record in data quality / observability and securing PII
- Ability to work autonomously, propose architecture improvements, and back decisions with data
Nice to have:
- Experience with OpenMetadata, Great Expectations, or similar tooling
- Experience mentoring teammates or growing a data function from scratch
Why you’ll enjoy working with us:
- Influence, don’t just maintain – your ideas directly shape our platform
- Green-field stack – minimal legacy, freedom to choose the best approach
- Career headroom – opportunity to grow as a leader as the platform and team scale