Fit4Me

Data Engineer

$$$$
Product
Fit4Me is looking for a Data Engineer to design, build, and maintain scalable data infrastructure that supports analytics, reporting, and business decision-making across the company.
The role centers on developing reliable data pipelines, integrating multiple data sources, ensuring data quality, and delivering business-oriented data solutions aligned with product and company goals.

This role is a great fit if you:
  • Work confidently with SQL (BigQuery), Python, dbt, Google Cloud Platform (GCF, GCS, Cloud Build, Cloud Scheduler, Pub/Sub, IAM), and Data Modeling.
  • Enjoy working with data at the infrastructure level and want to build a strong foundation for analytics.
  • Feel comfortable handling large-scale datasets, complex transformations, and multiple data sources at the same time.
  • Can identify potential issues in data and processes before they affect business results.
  • Like working closely with analysts and cross-functional teams to build practical and scalable data solutions.
  • Combine technical depth, systematic thinking, and ownership of results.
Impact Areas:
  • Building and maintaining ELT processes for collecting, transforming, and loading data from multiple sources into BigQuery.
  • Developing and scaling the dbt project for data modeling (stage, base, intermediate, dimension, report layers) used for analytics and reporting.
  • Optimizing pipeline performance, SQL queries, and data models to improve efficiency and reliability.
  • Validating, testing, and ensuring data quality across all stages of the data pipeline.
  • Setting up monitoring and alerting for data pipelines and key infrastructure components.
  • Integrating data from multiple sources (mobile app, finance, marketing), including APIs and external systems.
  • Working closely with analysts to ensure data is well-structured, easy to use, and supports business needs.
  • Maintaining up-to-date documentation for data architecture, ELT processes, and dbt models.
  • Tools you'll work with: SQL, Python, dbt, and GCP cloud services for building and maintaining data pipelines.
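The layered dbt modeling flow named above (stage → base → intermediate → dimension → report) can be sketched in plain Python with toy data. The layer names follow the posting; the table contents and transformations below are invented purely for illustration, not a description of Fit4Me's actual models:

```python
# Illustrative sketch of layered data modeling (stage -> base -> report),
# mirroring the dbt layer structure named in the posting.
# All data and transformations are hypothetical examples.

# Stage layer: raw records exactly as loaded from a source system.
stage_orders = [
    {"id": "1", "amount": "19.90", "country": "ua"},
    {"id": "2", "amount": "5.00", "country": "US"},
    {"id": "2", "amount": "5.00", "country": "US"},  # duplicate source row
]

def base_orders(stage_rows):
    """Base layer: typed, deduplicated, normalized records."""
    seen, rows = set(), []
    for r in stage_rows:
        if r["id"] in seen:
            continue  # drop duplicates by primary key
        seen.add(r["id"])
        rows.append({
            "id": int(r["id"]),
            "amount": float(r["amount"]),
            "country": r["country"].upper(),
        })
    return rows

def report_revenue_by_country(base_rows):
    """Report layer: aggregated metric ready for analysts."""
    totals = {}
    for r in base_rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

print(report_revenue_by_country(base_orders(stage_orders)))
# -> {'UA': 19.9, 'US': 5.0}
```

In dbt each layer would be its own SQL model (with tests and documentation) rather than a Python function, but the idea is the same: each layer reads only from the layer below it, so cleaning, business logic, and reporting concerns stay separated.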
What we expect from you:
  • 3+ years of experience as a Data Engineer or in a related role with a strong focus on pipelines and data modeling.
  • Deep knowledge of SQL: complex queries, optimization, and understanding of Data Warehousing principles.
  • Hands-on experience with BigQuery, Snowflake, Redshift, or other analytical data warehouses.
  • Practical experience with dbt, including testing, documentation, and CI/CD integration.
  • Experience building and maintaining ELT / ETL pipelines.
  • Strong Python skills for integrations, automation, and data processes.
  • Confident use of Git and experience working with GitHub / GitLab.
  • Experience with GCP or other cloud platforms.
  • Strong attention to detail, problem-solving mindset, and ownership of results.
  • Experience working closely with analysts and understanding their data needs.
Nice to have:
  • Experience optimizing cost efficiency and query performance in cloud environments.
  • Experience with real-time / streaming data solutions.
  • Ability to explain technical solutions clearly to business stakeholders.
Hiring process:
Recruiter Interview → Interview with Product Analyst and Data Engineer → Final Interview with HRD → Job Offer 🚀
Published 28 April