Senior Data Engineer
About the Role
We are seeking a Senior Data Engineer to architect, build, and own a complete data ecosystem in a fast-growing, cloud-native environment. This role offers the rare opportunity to design and implement data infrastructure from the ground up, with a focus on AWS technologies such as Redshift, DynamoDB, and RDS. You’ll also develop reverse ETL pipelines to operationalize insights across core systems, helping shape how data drives decision-making company-wide.
You will work remotely from Poland, collaborating cross-functionally with product, engineering, and growth teams. Your work will directly influence company strategy and empower every team to make informed, data-driven decisions.
Key Responsibilities
Data Ownership and Architecture
- Serve as the primary data engineer responsible for designing and maintaining the company’s end-to-end data infrastructure.
- Define and implement scalable, high-performance data systems capable of supporting millions of users.
- Make architectural decisions that set the foundation for long-term data strategy.
AWS Data Infrastructure
- Design and optimize AWS Redshift as the central analytics warehouse.
- Build ingestion pipelines from RDS, DynamoDB, and other relational and NoSQL sources.
- Leverage AWS services (Lambda, Glue, Kinesis, S3) for a modern and efficient data stack.
- Own query performance tuning, cluster scaling, and cost optimization for the warehouse.
Reverse ETL and Data Activation
- Build reverse ETL pipelines to operationalize analytics data.
- Sync enriched datasets to production systems, CRM, and customer engagement tools.
- Implement real-time data activation and feedback loops that improve user experience.
Scalability and Governance
- Rapidly prototype and deploy data solutions to support business growth.
- Develop self-service analytics capabilities for non-technical teams.
- Establish and maintain scalable data governance and data quality frameworks.
Technical Leadership
- Define and enforce best practices in data engineering and cloud infrastructure.
- Mentor engineers on AWS and data technologies.
- Translate complex business needs into technical solutions.
- Champion data quality, consistency, and reliability across all systems.
Requirements
- 5+ years of hands-on experience with AWS Redshift, including performance tuning and cluster management.
- 3+ years working with AWS RDS and DynamoDB, including NoSQL modeling.
- Proven experience implementing reverse ETL pipelines and data activation frameworks.
- Strong programming skills in Python or an equivalent language for building data pipelines.
- Experience independently leading data infrastructure projects or serving as a solo data engineer.
- Strong knowledge of modern data stack tools (e.g., dbt, Airflow, Fivetran, Census).
- Familiarity with streaming architectures, real-time analytics, and data warehousing principles.
- Proficiency with Infrastructure as Code (Terraform preferred).
- Excellent communication skills and ability to work effectively in remote, distributed teams.
- Experience working in SaaS or scale-up environments, including familiarity with SaaS metrics and data challenges.
Technical Competencies
- Advanced SQL and query optimization.
- Experience with CDC (Change Data Capture) and API-based data services.
- Knowledge of data quality frameworks and metadata management.
- Cost optimization for AWS data workloads.
- Understanding of SaaS data models, including multi-tenancy and subscription analytics.
Preferred Qualifications
- Experience with AWS Kinesis or other real-time streaming platforms.
- Exposure to ML pipelines or AWS SageMaker.
- Familiarity with customer data platforms (CDPs).
- Experience in product analytics and experimentation systems.
- Contributions to open-source data tools or frameworks.
Required Languages
- English: B2 (Upper Intermediate)