Senior Software/Data Engineer (Python + Distributed Systems)

$$$$

EST working hours.

CrunchCode is an international IT services company with about 7 years of experience building web services and web applications. We work in staff augmentation (outstaff) and outsourcing models, placing specialists on client projects under a long-term cooperation model.

We mainly work on projects in the logistics (including last mile), e-commerce, fintech, and banking domains, as well as enterprise solutions.
It is important to us that a project is "clean" and transparent in terms of ethics and value for its users.

As a matter of principle, we do not take on projects related to:
● gambling,
● adult content and pornography,
● fraud or any development aimed at deception or manipulation.

What We Offer:
● Fully remote work
● Long-term, stable project
● High level of autonomy and trust
● Minimal bureaucracy
● Direct impact on business-critical logistics systems
● Long-term engagement, not a short-term contract

Required:
Senior Software / Data Engineer, English B2+
Full-time, EST working hours

Tech Stack:
Python 3.10.x, PySpark, Airflow, Pandas, AWS (ECS, Lambda, SQS, SNS,
ElastiCache, CloudWatch), Delta Lake, Databricks, MySQL (Aurora),
Terraform, Datadog, pytest.
 

Team:
ML Data Engineering — 9 members
2-week sprints: kickoff Monday evenings / demo Friday evenings (EST)

Requirements (Must-have):
- 7+ years of professional software engineering experience.
- Strong proficiency in Python (3.10.x).
- Expertise in large-scale event-driven and distributed system design.
- Strong AWS experience: ECS, Lambda, SQS, SNS, CloudWatch.
- Experience with data processing frameworks: Spark, Databricks.
- Hands-on experience with infrastructure-as-code (Terraform).
- Solid understanding of system performance, profiling, and optimization.
- Experience leading technical projects and mentoring engineers.
- Experience building and maintaining data-intensive backend systems or pipelines.

Responsibilities:
- Lead design and delivery of event-driven, distributed systems for large-scale
 metadata extraction, enrichment, and processing.
- Build and maintain scalable APIs and backend services for high-throughput
 content pipelines on AWS.
- Partner with Data Science, ML Engineering, Infrastructure, and Product teams
 to architect reliable, high-performance systems.
- Provide technical leadership and mentorship across the engineering org.
- Leverage AWS (ECS, Lambda, SQS, SNS, ElastiCache, CloudWatch) to deploy
 resilient production systems.
- Optimize existing systems for scalability, reliability, and performance.
- Ensure system health via monitoring, observability, and automated testing.
- Contribute to engineering strategy — identify gaps, propose initiatives,
 improve existing frameworks.

Nice to Have:
- Experience with Scala or Ruby.
- Experience with Airflow or similar workflow orchestration tools.
- Experience integrating ML or LLM-based models into production systems.
- Bachelor's degree in Computer Science or equivalent.

Hiring Process:
- Intro call
- Technical discussion
- Offer
Start: ASAP

Required languages:
- English: B2 (Upper Intermediate)
- Ukrainian: Native
Published 10 April