Senior Data Engineer
We are looking for a Senior Data Engineer to join a project in the tech capital of the world: Silicon Valley.
Project Idea
The project was founded back in 2014 with the goal of connecting private and public universities with everyday people like us. A variety of auditoriums, gyms, classrooms, and other venues are available for community use: schedule facility use and manage requests from the community, all in one place.
Just imagine that you're a football player and you can rent a football field at Harvard to play with your friends. Amazing, right?
Must have:
- 4+ years of experience in Data Engineering or related roles;
- Strong SQL skills and hands-on experience with PostgreSQL (including Aurora Serverless);
- Solid knowledge of Python for data processing and automation;
- Experience building and maintaining ETL/ELT pipelines using cloud-native tools (e.g., AWS Lambda, S3, SQS);
- Proven experience working with MongoDB and MongoDB Atlas, including event-driven architectures using Atlas Triggers and Atlas Stream Processing;
- Proficiency with dbt for building modular, testable, and well-documented data transformation workflows;
- Good understanding of data modeling principles for OLAP/OLTP systems, including normalization and dimensional modeling;
- Demonstrated experience designing and implementing data warehouses and data marts;
- Working knowledge of Node.js, particularly in backend logic tied to data ingestion or transformation workflows;
- Familiarity with cloud data platforms (e.g., AWS) and serverless computing patterns;
- A technical degree (e.g., Computer Science, Engineering, Math) is a plus;
- Upper-Intermediate English or higher for effective communication and documentation.
Soft skills:
- Proactive: you take ownership and act without waiting for direction;
- Detail-oriented: you deliver accurate, high-quality work;
- Initiative-driven: you're eager to improve processes and take action.
Responsibilities:
- Design, implement, and maintain scalable and reliable data pipelines using Python, dbt, and AWS Lambda;
- Build and optimize data architectures to support analytics, reporting, and machine learning use cases, including data warehouse and data mart modeling on PostgreSQL (Aurora Serverless);
- Develop and manage ELT workflows that extract data from MongoDB (using Atlas triggers) and load into staging and production layers in PostgreSQL;
- Ensure data consistency and lineage by applying robust data quality checks, auditing, and reconciliation logic;
- Collaborate with cross-functional teams to gather data requirements, understand business logic, and translate them into efficient data models and transformations;
- Monitor and troubleshoot SQS/Lambda-based ingestion pipelines, addressing issues related to concurrency, message processing, and data duplication;
- Contribute to the semantic layer design used by BI and reporting tools to ensure consistency and accessibility of business metrics;
- Maintain and evolve dbt models (staging, intermediate, and marts) aligned with software engineering and analytics best practices;
- Drive continuous improvement in data engineering processes and data governance standards, ensuring scalability, maintainability, and security.
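The duplicate-handling responsibility above (SQS delivers messages at least once, so a Lambda consumer can see the same event twice) is commonly addressed with idempotent upserts keyed on an event ID. A minimal sketch of that pattern, with illustrative table and field names; sqlite3 stands in for PostgreSQL here, since both accept the same `INSERT ... ON CONFLICT` upsert syntax:

```python
import json
import sqlite3

# Stand-in for the PostgreSQL staging table; the ON CONFLICT clause
# below is valid in both SQLite (>= 3.24) and PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE staging_bookings (
        event_id  TEXT PRIMARY KEY,   -- natural key from the source event
        facility  TEXT,
        payload   TEXT
    )
""")

def handle_batch(records):
    """Process an SQS-style batch idempotently: a re-delivered message
    overwrites its earlier copy instead of creating a duplicate row."""
    for rec in records:
        body = json.loads(rec["body"])
        conn.execute(
            """
            INSERT INTO staging_bookings (event_id, facility, payload)
            VALUES (?, ?, ?)
            ON CONFLICT (event_id) DO UPDATE SET
                facility = excluded.facility,
                payload  = excluded.payload
            """,
            (body["event_id"], body["facility"], json.dumps(body)),
        )
    conn.commit()

# Simulate at-least-once delivery: the same event arrives twice.
event = {"event_id": "evt-1", "facility": "gym-a"}
handle_batch([{"body": json.dumps(event)}, {"body": json.dumps(event)}])
count = conn.execute("SELECT COUNT(*) FROM staging_bookings").fetchone()[0]
print(count)  # 1 row despite two deliveries
```

In production this logic would run inside the Lambda handler against Aurora PostgreSQL, with the primary-key constraint doing the deduplication work regardless of message ordering or Lambda concurrency.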
Required languages
- English: B2 (Upper-Intermediate)
Published 12 May