Base Salary
$175k - $225k/yr
Responsibilities
- Build and extend batch pipelines using dbt and Dagster.
- Develop and optimize BigQuery data models for analytics and reporting.
- Implement and maintain Kafka/PubSub + Flink pipelines for real-time data processing.
- Design data platform standards and best practices for batch and streaming.
- Implement monitoring, alerting, and SLAs/SLOs for data quality.
- Collaborate with analytics, product, and engineering teams to onboard new data sources.
- Own platform operations including performance tuning and cost optimization.
- Design a unified serving layer architecture for consistent datasets.
- Establish strong data governance and observability practices.
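To give a flavor of the monitoring and SLO work described above, here is a minimal sketch of a table-freshness check (pure Python; the table names and staleness thresholds are invented, and a real implementation would read load timestamps from warehouse metadata and route alerts through the team's alerting stack):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness SLOs: each table must have received a load
# within its allowed staleness window, or we emit an alert.
FRESHNESS_SLOS = {
    "orders": timedelta(hours=1),          # streaming-fed table
    "daily_revenue": timedelta(hours=26),  # daily batch with buffer
}

def check_freshness(last_loaded, now=None):
    """Return alert messages for tables violating their freshness SLO.

    `last_loaded` maps table name -> timestamp of the most recent load,
    standing in for what a real check would fetch from the warehouse.
    """
    now = now or datetime.now(timezone.utc)
    alerts = []
    for table, max_staleness in FRESHNESS_SLOS.items():
        loaded = last_loaded.get(table)
        if loaded is None:
            alerts.append(f"{table}: no load recorded")
        elif now - loaded > max_staleness:
            alerts.append(f"{table}: stale by {(now - loaded) - max_staleness}")
    return alerts
```

In practice a check like this would run on a schedule in the orchestrator (e.g. as a Dagster sensor) rather than as a standalone script.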
Requirements
- Strong proficiency in SQL for advanced querying and data modeling.
- Hands-on experience with dbt for data transformations.
- Experience with batch orchestration tools like Dagster or Airflow.
- Proficiency in Python for data engineering tasks.
- Deep familiarity with BigQuery or an equivalent cloud data warehouse.
- Solid experience with GCP infrastructure and security practices.
- Strong engineering fundamentals including version control and testing.
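As an illustration of the SQL fluency asked for above, here is a small window-function example of the deduplicate-to-latest-row pattern common in warehouse modeling (run against in-memory SQLite for portability; the table, columns, and data are invented):

```python
import sqlite3

# Invented example data: per-user events. The query ranks each user's
# events by timestamp and keeps only the most recent row per user.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, event_ts TEXT, amount REAL);
    INSERT INTO events VALUES
        ('u1', '2024-01-01', 10.0),
        ('u1', '2024-01-03', 30.0),
        ('u2', '2024-01-02', 20.0);
""")

LATEST_EVENT_SQL = """
    SELECT user_id, event_ts, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY user_id ORDER BY event_ts DESC
               ) AS rn
        FROM events
    )
    WHERE rn = 1
    ORDER BY user_id;
"""

rows = conn.execute(LATEST_EVENT_SQL).fetchall()
```

The same `ROW_NUMBER() OVER (PARTITION BY ...)` pattern carries over directly to BigQuery and to dbt models.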
Benefits
- Competitive salary and stock options.
- Flexible vacation policy promoting rest and recharging.
- Remote-first work environment with virtual and in-person events.
- Comprehensive health, dental, and vision insurance.
- 16 weeks of parental leave for all parents.
