Data Engineer

We are working on a US-based data-driven product, building a scalable and cost-efficient data platform that transforms raw data into actionable business insights.

For us, data engineering is not just about moving data — it’s about doing it right: with strong architecture, performance optimization, and automation at the core.

Role Overview

We are looking for a highly analytical and technically strong Data Engineer to design, build, optimize, and maintain scalable data pipelines.

You will be responsible for the architectural integrity of the data platform, ensuring seamless data flow from ingestion to business-ready datasets.

The ideal candidate is an expert in SQL and Python who understands that great data engineering means:

  • cost efficiency,
  • smart partitioning and modeling,
  • performance optimization,
  • reliable automation.

Technical Requirements

Must-Have

  • Expert-Level SQL
    • Complex queries and window functions (see the sketch after this list)
    • Query optimization and performance tuning
    • Identifying and fixing bottlenecks
    • Reducing query complexity
  • Python
    • Data manipulation
    • Scripting
    • Building ETL / ELT frameworks
  • AWS Core Infrastructure
    • Amazon Kinesis Data Firehose (near-real-time data streaming)
    • Amazon S3 (data storage)
  • Version Control
    • Git (GitHub / GitLab)
    • Branching strategies
    • Participation in technical code reviews
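
For candidates wondering what this looks like in practice, here is a minimal, self-contained sketch (illustrative only, not our production code) of a window-function pattern we use constantly: keeping the latest row per key with ROW_NUMBER(). It uses Python's built-in sqlite3 module so it runs anywhere; the table and column names are hypothetical.

```python
"""Illustrative only: deduplicate to the latest row per user with ROW_NUMBER().
Uses sqlite3 from the standard library (window functions need SQLite 3.25+),
so the sketch runs as-is; table and column names are hypothetical."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, action TEXT, created_at TEXT);
    INSERT INTO events VALUES
        (1, 'signup',  '2024-01-01'),
        (1, 'upgrade', '2024-03-15'),
        (2, 'signup',  '2024-02-10');
""")

# Rank each user's rows by recency, then keep rank 1. This CTE-plus-rank
# shape replaces correlated subqueries and self-joins, which is what
# "reducing query complexity" usually means in practice.
LATEST_PER_USER = """
WITH ranked AS (
    SELECT user_id, action, created_at,
           ROW_NUMBER() OVER (
               PARTITION BY user_id
               ORDER BY created_at DESC
           ) AS rn
    FROM events
)
SELECT user_id, action, created_at
FROM ranked
WHERE rn = 1
ORDER BY user_id;
"""

for row in conn.execute(LATEST_PER_USER):
    print(row)  # (1, 'upgrade', '2024-03-15') then (2, 'signup', '2024-02-10')
```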

Nice-to-Have

  • Modern Data Stack
    • dbt for modular SQL modeling and documentation
  • Data Warehousing
    • Google BigQuery
    • Query optimization, slot management, cost-efficient querying
  • Advanced Optimization Techniques (illustrated in the sketch after this list)
    • Partitioning
    • Clustering
    • Bucketing
  • Salesforce Integration
    • Experience integrating Salesforce data into various destinations
  • Docker / ECS
  • AI / ML exposure (a plus)
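
As an illustration of the partitioning, clustering, and cost-control points above, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table, and column names (and the byte cap) are hypothetical placeholders, not our actual setup.

```python
"""Hypothetical sketch: a date-partitioned, clustered BigQuery table, so that
queries filtering on event_date and customer_id scan (and bill for) less data.
Assumes the google-cloud-bigquery package and application-default credentials;
all names below are placeholders."""
from google.cloud import bigquery

client = bigquery.Client()

table = bigquery.Table(
    "my-project.analytics.events",  # hypothetical table id
    schema=[
        bigquery.SchemaField("event_date", "DATE"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
# Day partitions on event_date: date filters prune whole partitions.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_date",
)
# Clustering by customer_id: lookups within a partition touch fewer blocks.
table.clustering_fields = ["customer_id"]
client.create_table(table)

# Cost guardrail: cap bytes billed per query so a bad filter fails fast
# instead of scanning the whole table.
job_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**9)
rows = client.query(
    "SELECT COUNT(*) FROM `my-project.analytics.events` "
    "WHERE event_date = DATE '2024-01-01'",
    job_config=job_config,
).result()
```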

Key Responsibilities

  • Pipeline Architecture
    • Design and implement robust data pipelines using AWS Kinesis and Python (see the ingestion sketch after this list)
    • Move data from raw sources to the Data Warehouse following best practices
  • Data Modeling
    • Transform raw data into clean, business-ready datasets using dbt
  • Performance Engineering
    • Optimize SQL queries and data structures for high performance and cost efficiency
  • Code Quality
    • Lead and participate in code reviews
    • Ensure high standards for performance, security, and readability
  • Collaboration
    • Work closely with Data Analysts and Product Managers
    • Translate business requirements into scalable data schemas
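
To make the pipeline-architecture bullet concrete, here is a minimal ingestion sketch with boto3. The delivery stream name, region, and event shape are hypothetical, and the stream is assumed to already exist with an S3 destination configured.

```python
"""Hypothetical ingestion sketch: push JSON events into a Kinesis Data
Firehose delivery stream whose destination is an S3 bucket. Assumes boto3
and AWS credentials in the environment; the stream name and region are
placeholders."""
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

def send_event(event: dict, stream: str = "raw-events-to-s3") -> str:
    # Firehose buffers records and delivers them in batches to the S3
    # destination configured on the stream; newline-delimited JSON keeps
    # the landed objects easy to split downstream.
    response = firehose.put_record(
        DeliveryStreamName=stream,
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )
    return response["RecordId"]

if __name__ == "__main__":
    print(send_event({"user_id": 42, "action": "signup"}))
```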

Working Schedule

  • Monday – Friday
  • 16:00 – 00:00 Kyiv time
  • Full alignment with a US-based team and stakeholders

What We Value

  • Strong ownership of data architecture
  • Ability to think beyond “just making it work”
  • Focus on scalability, performance, and cost
  • Clear communication with technical and non-technical teams

Required Languages

  • English: B2 (Upper-Intermediate)