Senior Data Engineer (Python, Spark, AWS) - Pan-Baltic Banking

Up to $5500

CrunchCode is an international IT services company with about 7 years of experience building web services and web applications. We work in staff augmentation (outstaff) and outsourcing models, placing specialists on client projects under a long-term cooperation model.

We work primarily on projects in logistics (including last mile), e-commerce, fintech, and banking, as well as on enterprise solutions.
It matters to us that a project is "clean" and transparent in terms of ethics and value for its users.

As a matter of principle, we do not take on projects related to:
● gambling,
● adult content and pornography,
● fraud or any development aimed at deception or manipulation.

What We Offer:
● Fully remote work
● Long-term, stable project
● High level of autonomy and trust
● Minimal bureaucracy
● Direct impact on business-critical systems
● Long-term engagement, not a short-term contract

Project Overview
Enterprise-grade data engineering role within a Pan-Baltic banking environment. The engineer will work on large-scale data platforms, building and maintaining production-ready pipelines and modern data lakehouse architectures across the organization.

Tech Stack
Python, Apache Spark, AWS (EMR, Glue, Athena, S3), SQL, Data Lakehouse, ETL/ELT

Requirements (Must-have):
- Strong hands-on enterprise data engineering experience
- Advanced SQL skills and solid understanding of analytical and dimensional data modeling
- Proven experience with Data Lakehouse / modern data platform concepts (cloud object storage, open table formats, distributed processing)
- Strong hands-on experience with AWS data services: EMR, Glue, Athena, S3-based data lakes
- Strong hands-on experience with Apache Spark for large-scale data processing
- Strong Python skills for ETL development, data processing, and automation
- Very strong analytical, problem-solving, and system-level thinking skills
- Fluent English B2+ — required for Pan-Baltic communication (spoken and written)

Responsibilities:
- Design, implement, and maintain robust production data pipelines
- Build and optimize Data Lakehouse and modern data platform solutions
- Work with AWS data services across the full data engineering stack
- Process large-scale data using Apache Spark
- Develop ETL workflows and automation scripts in Python
- Apply advanced SQL and dimensional/analytical data modeling
- Contribute to system-level thinking and data architecture decisions

Nice to Have:
- Experience in banking or financial services
- Familiarity with data governance and compliance practices

Hiring Process:
- Intro call
- Technical discussion
- Offer

Start: ASAP

Required languages:
- English B2 (Upper-Intermediate)
- Ukrainian (Native)

Published 22 April