Data Engineer (GCP / BigQuery / Python) to $6100

CrunchCode is an international IT service company with roughly 7 years of experience building web services and web applications. We work in staff augmentation (outstaff) and outsourcing models, placing specialists on client projects under a long-term cooperation model.

We work mostly on projects in the logistics (including last mile), e-commerce, fintech, and banking domains, as well as on enterprise solutions.
It is important to us that a project is "clean" and clearly ethical, delivering real value to its users.

As a matter of principle, we do not take on projects involving:
● gambling,
● adult content or pornography,
● fraud, or any development aimed at deception or manipulation.

What We Offer:
● Fully remote work
● Long-term, stable project
● High level of autonomy and trust
● Minimal bureaucracy
● Direct impact on business-critical logistics systems
● Long-term engagement, not a short-term contract

About the Client:
The organisation brings together a multidisciplinary team of data scientists, actuaries, statisticians, business analysts, strategy consultants, engineers, technologists, programmers, and product developers — all focused on leveraging data to drive meaningful, transformational outcomes.

Project Overview
Our client is building data- and AI-driven solutions in a cloud-first environment, with a strong focus on GCP, BigQuery, and scalable data workflows. Their teams work across data science, engineering, and solution design to create systems that can support large-scale, real-world use cases, including voice-related and unstructured data scenarios.

Tech Stack:
● Python
● SQL
● GCP
● BigQuery
● Data pipelines
● ETL / ELT workflows
● Data modeling
● DBT (nice to have)
● Airflow / Cloud Composer (nice to have)

What We’re Looking For:
● Strong experience with Python for data engineering;
● Strong experience with GCP and cloud-based data workflows;
● Experience building and maintaining production data pipelines;
● Strong SQL and data modeling skills;
● Ability to work with large-scale or unstructured data;
● Experience collaborating with data scientists and cross-functional engineering teams;
● Ability to build practical, reliable solutions in an evolving environment.

Responsibilities:
● Build new data pipelines and adapt existing ETL / ELT workflows
● Move and organize data in cloud storage and analytical systems such as BigQuery
● Work closely with data scientists on tables, transformations, and methodology implementation
● Connect services and data sources required for the solution
● Optimize queries and support scalable data processing
● Work with large, unstructured datasets, including voice-related data
● Help operationalize evolving architecture decisions into production-ready data flows
● Collaborate with engineering and data teams to ensure reliable execution

Nice to Have:
● Experience with DBT
● Experience with Airflow or Cloud Composer
● Experience with large, unstructured, or voice-related datasets
● Lead-level coordination experience
● Experience supporting production data workflows in rapidly evolving projects

Hiring Process:
● Intro call
● Technical discussion (focused on real experience)
● Offer
Start: ASAP

Required languages

English B2 - Upper Intermediate
Ukrainian Native
Python, SQL, GCP, BigQuery, ETL, Data Pipelines, dbt, Airflow
Published 17 March