Jobs Data Engineer

  • · 86 views · 12 applications · 24d

    Senior Data Engineer (PySpark)

    Full Remote · Worldwide · 6 years of experience · English - B2

    QIT Software is looking for a Data Engineer for a hospitality technology company running an analytics platform that serves 2,500+ hotels and 500+ restaurants. You will own and operate our AWS data infrastructure - building pipelines, fixing what breaks, and making the platform more reliable and scalable.


    Project: 
    Hospitality Analytics Platform
    Requirements:
    - 6+ years hands-on data engineering (not architecture diagrams - actual pipelines in production)
    - Strong Spark/PySpark and Python
    - Advanced SQL
    - AWS data stack: EMR, Glue, S3, Redshift (or similar), IAM, CloudWatch
    - Terraform

    Would be a plus:
    - Kafka/Kinesis streaming experience
    - Airflow or similar orchestration
    - Experience supporting BI tools and analytics teams

    Responsibilities:
    - Build and operate Spark/PySpark workloads on EMR and Glue
    - Design end-to-end pipelines: ingestion from APIs, databases, and files → transformation → delivery to analytics consumers
    - Implement data quality checks, validation, and monitoring (a minimal PySpark sketch follows this list)
    - Optimize for performance, cost, and reliability — then keep it running
    - Work directly with product and analytics teams to define data contracts and deliver what they need
    - Manage infrastructure via Terraform
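
    To make the data-quality item above concrete, here is a minimal, hedged sketch of the kind of PySpark validation step such a pipeline might run on EMR or Glue. The bucket paths, dataset, and column names (e.g. booking_id) are illustrative assumptions, not details from this posting.

```python
# Illustrative sketch only: the S3 paths, dataset, and column names below are
# assumptions for the example, not details taken from the job posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bookings-quality-check").getOrCreate()

# Read a hypothetical raw dataset landed in S3.
bookings = spark.read.parquet("s3://example-bucket/raw/bookings/")

# Simple data-quality checks: required key present and no duplicate keys.
total = bookings.count()
null_keys = bookings.filter(F.col("booking_id").isNull()).count()
duplicate_keys = total - bookings.dropDuplicates(["booking_id"]).count()

if null_keys > 0 or duplicate_keys > 0:
    # A real job might emit a CloudWatch metric or alert instead of failing hard.
    raise ValueError(
        f"Quality check failed: {null_keys} null keys, {duplicate_keys} duplicates"
    )

# Hand validated data on to downstream analytics consumers.
bookings.write.mode("overwrite").parquet("s3://example-bucket/validated/bookings/")
```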

    Work conditions:
    - The ability to work remotely from anywhere in the world;
    - Flexible work schedule, no micromanagement, no strict deadlines, and no unpaid overtime;
    - Work on European and American products with a modern technology stack across different industries (Finance, Technology, Health, Construction, Media, etc.);
    - Annual salary review, or reviews on an individual basis;
    - Accounting support and full payment of taxes by the company;
    - 100% compensation for remote English lessons;
    - 15 days of paid time off (PTO) and public holidays.

     

  • · 48 views · 1 application · 24d

    Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    Summary

    • 5+ years in data science or data engineering roles;
    • Proficient in Python, SQL, and common data tools (Pandas, Plotly, Streamlit, Dash);
    • Familiarity with large language models (LLMs) and deploying ML in production;
    • This role is NOT focused on BI, data platforms, or research work;
    • Good fit: Hands-on, Python-first Applied AI / GenAI engineers with real delivery ownership and client-facing experience;
    • No fit: Data platform or BI profiles, architecture-heavy or lead-only roles, research-focused profiles, or candidates with only PoC-level GenAI exposure and no ownership.

     

    Role:

    This role is ideal for someone who is comfortable working across the entire pre-sales-to-delivery lifecycle, rolls up their sleeves to solve complex, multi-faceted problems, thrives as a technical communicator, and works well as a key member of a team.

     

    Requirements:

    • 5+ years in data science or data engineering roles;
    • Proficient in Python, SQL, and common data tools (pandas, Plotly, Streamlit, Dash); 
    • Familiarity with large language models (LLMs) and deploying ML in production;
    • Experience working with APIs and interpreting technical documentation;
    • Client-facing mindset with clear ownership of decisions and outcomes;
  • · 32 views · 0 applications · 25d

    Data Engineer for Shelf Analytics MŁ

    Full Remote · Ukraine · 5 years of experience · English - B2
    • Project Description:

      We are looking for an experienced Data Engineer to join the Shelf Analytics project – a data-driven application designed to analyze how P&G products are positioned on store shelves. The primary objective of the solution is to improve product visibility, optimize in-store execution, and ultimately increase sales by combining shelf layout data with sales insights.

      As a Data Engineer, you will play a key role in building, maintaining, and enhancing scalable data pipelines and analytics workflows that power shelf-level insights. You will work closely with analytics and business stakeholders to ensure high-quality, reliable, and performant data solutions.
       

    • Responsibilities:

      Design, develop, and maintain data pipelines and workflows using Databricks and PySpark
      Read, understand, and extend existing codebases; independently develop new components for Databricks workflows
      Implement object-oriented Python solutions (classes, inheritance, reusable modules) - a brief sketch follows this list
      Develop and maintain unit tests to ensure code quality and reliability
      Work with Spark SQL and SQL Server Management Studio to create and optimize complex queries
      Create and manage Databricks workflows, clusters, databases, and tables
      Handle data storage and access management in Azure Data Lake Storage (ADLS), including ACL permissions
      Collaborate using GitHub, following CI/CD best practices and working with GitHub Actions
      Support continuous improvement of data engineering standards, performance, and scalability
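
      As a rough illustration of the object-oriented Python and unit-testing expectations above, the sketch below shows one possible shape of a reusable PySpark component with a test that runs on a local SparkSession. The class name, columns (store_id, sku, facings), and metric are hypothetical, not part of the project description.

```python
# Hypothetical sketch: class, column names, and metric are illustrative only.
from pyspark.sql import DataFrame, SparkSession, functions as F


class ShelfShareTransformer:
    """Reusable transformation: share of shelf facings per product within a group."""

    def __init__(self, group_cols):
        self.group_cols = list(group_cols)

    def transform(self, df: DataFrame) -> DataFrame:
        totals = df.groupBy(*self.group_cols).agg(
            F.sum("facings").alias("total_facings")
        )
        return df.join(totals, on=self.group_cols).withColumn(
            "shelf_share", F.col("facings") / F.col("total_facings")
        )


def test_shelf_share_sums_to_one():
    # Unit test against a local SparkSession, so it also runs outside Databricks.
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [("store1", "sku_a", 2), ("store1", "sku_b", 2)],
        ["store_id", "sku", "facings"],
    )
    result = ShelfShareTransformer(["store_id"]).transform(df)
    shares = [row["shelf_share"] for row in result.collect()]
    assert abs(sum(shares) - 1.0) < 1e-9
```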
       

    • Mandatory Skills Description:

      Strong programming skills in Python and PySpark
      Hands-on experience with Databricks (workflows, clusters, tables, databases)
      Solid knowledge of SQL and experience with Spark SQL and SQL Server Management Studio
      Experience with pandas, dbx, and unit testing frameworks
      Practical experience working with Azure Storage (ADLS) and access control (ACLs)
      Proficiency with GitHub, including CI/CD pipelines and GitHub Actions
      Ability to work independently, analyze existing solutions, and propose improvements

  • · 60 views · 13 applications · 25d

    Senior Data Engineer

    Full Remote · Worldwide · 4 years of experience · English - B2

    We’re currently looking for a Senior Data Engineer for a long-term project, with immediate start.

     

    The role requires:

    - Databricks certification (mandatory)

    - Solid hands-on experience with Spark

    - Strong SQL (Microsoft SQL Server) knowledge

     

    The project involves the migration from Microsoft SQL Server to Databricks, along with data-structure optimization and enhancements.

  • · 63 views · 3 applications · 25d

    Senior Data Engineer

    Full Remote · Bulgaria, Spain, Poland, Portugal, Ukraine · 5 years of experience · English - B1

    We are seeking a Senior Data Engineer to deliver data-driven solutions that optimize fleet utilization and operational efficiency across 46,000+ assets in 545+ locations. You will enable decision-making through demand forecasting, asset cascading, contract analysis, and risk detection, partnering with engineering and business stakeholders to take models from concept to production on AWS. 

     

    Requirements 

    • 5+ years of experience in data engineering 
    • 3+ years of hands-on experience building and supporting production ETL/ELT pipelines 
    • Advanced SQL skills (CTEs, window functions, performance optimization) 
    • Strong Python skills (pandas, API integrations) 
    • Proven experience with Snowflake (schema design, Snowpipe, Streams, Tasks, performance tuning, data quality) 
    • Solid knowledge of AWS services: S3, Lambda, EventBridge, IAM, CloudWatch, Step Functions 
    • Strong understanding of dimensional data modeling (Kimball methodology, SCDs) 
    • Experience working with enterprise systems (ERP, CRM, or similar) 

     

    Nice-to-haves 

    • Experience with data quality frameworks (Great Expectations, Deequ) 
    • Knowledge of CDC tools and concepts (AWS DMS, Kafka, Debezium) 
    • Hands-on experience with data lake technologies (Apache Iceberg, Parquet) 
    • Exposure to ML data pipelines and feature stores (SageMaker Feature Store) 
    • Experience with document processing tools such as Amazon Textract 

     

    Core Responsibilities 

    • Design and develop ETL/ELT pipelines using Snowflake, Snowpipe, internal systems, Salesforce, SharePoint, and DocuSign 
    • Build and maintain dimensional data models in Snowflake using dbt, including data quality checks (Great Expectations, Deequ) 
    • Implement CDC patterns for near real-time data synchronization (a brief sketch follows this list)
    • Manage and evolve the data platform across S3 Data Lake (Apache Iceberg) and Snowflake data warehouse 
    • Build and maintain Medallion architecture data lake in Snowflake 
    • Prepare ML features using SageMaker Feature Store 
    • Develop analytical dashboards and reports in Power BI 
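
    As a hedged illustration of the CDC item above, the sketch below shows one way a Python task could apply a MERGE in Snowflake to synchronize changed rows. The connection parameters, schemas (staging, analytics), table, and column names are placeholders, not details from this posting.

```python
# Hypothetical sketch: connection parameters, schemas, tables, and columns are
# placeholders for illustration, not details from the job posting.
import snowflake.connector

MERGE_SQL = """
MERGE INTO analytics.dim_asset AS tgt
USING staging.asset_changes AS src
    ON tgt.asset_id = src.asset_id
WHEN MATCHED THEN UPDATE SET
    tgt.location_id = src.location_id,
    tgt.status      = src.status,
    tgt.updated_at  = src.updated_at
WHEN NOT MATCHED THEN INSERT (asset_id, location_id, status, updated_at)
    VALUES (src.asset_id, src.location_id, src.status, src.updated_at)
"""

conn = snowflake.connector.connect(
    account="example_account",    # placeholder
    user="example_user",          # placeholder
    password="example_password",  # placeholder
    warehouse="ANALYTICS_WH",
    database="FLEET",
)
cur = conn.cursor()
try:
    # Apply the latest captured changes to the dimension table.
    cur.execute(MERGE_SQL)
finally:
    cur.close()
    conn.close()
```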

     

    What we offer   

    • Continuous learning and career growth opportunities 
    • Professional training and English/Spanish language classes   
    • Comprehensive medical insurance 
    • Mental health support 
    • Specialized benefits program with compensation for fitness activities, hobbies, pet care, and more 
    • Flexible working hours 
    • Inclusive and supportive culture 
  • · 23 views · 1 application · 25d

    Senior Data Engineer

    Full Remote · Ukraine · 6 years of experience · English - B2


    Project Description:

    We are looking for an experienced Data Engineer to join the Shelf Analytics project – a data-driven application designed to analyze how P&G products are positioned on store shelves. The primary objective of the solution is to improve product visibility, optimize in-store execution, and ultimately increase sales by combining shelf layout data with sales insights.

    As a Data Engineer, you will play a key role in building, maintaining, and enhancing scalable data pipelines and analytics workflows that power shelf-level insights. You will work closely with analytics and business stakeholders to ensure high-quality, reliable, and performant data solutions.

    Responsibilities:

    Design, develop, and maintain data pipelines and workflows using Databricks and PySpark
    Read, understand, and extend existing codebases; independently develop new components for Databricks workflows
    Implement object-oriented Python solutions (classes, inheritance, reusable modules)
    Develop and maintain unit tests to ensure code quality and reliability
    Work with Spark SQL and SQL Server Management Studio to create and optimize complex queries
    Create and manage Databricks workflows, clusters, databases, and tables
    Handle data storage and access management in Azure Data Lake Storage (ADLS), including ACL permissions
    Collaborate using GitHub, following CI/CD best practices and working with GitHub Actions
    Support continuous improvement of data engineering standards, performance, and scalability

    Mandatory Skills Description:

    Strong programming skills in Python and PySpark
    Hands-on experience with Databricks (workflows, clusters, tables, databases)
    Solid knowledge of SQL and experience with Spark SQL and SQL Server Management Studio
    Experience with pandas, dbx, and unit testing frameworks
    Practical experience working with Azure Storage (ADLS) and access control (ACLs)
    Proficiency with GitHub, including CI/CD pipelines and GitHub Actions
    Ability to work independently, analyze existing solutions, and propose improvements

    Nice-to-Have Skills Description:

    Experience with retail, CPG, or shelf analytics–related solutions
    Familiarity with large-scale data processing and analytics platforms
    Strong communication skills and a proactive, problem-solving mindset

    Languages:

    English: B2 Upper Intermediate

  • · 17 views · 0 applications · 25d

    Senior Data Engineer

    Full Remote · Ukraine · 6 years of experience · English - None

    Project Description
    The project focuses on the modernization, maintenance, and development of an eCommerce platform for a large US-based retail company, serving millions of omnichannel customers weekly.

    Solutions are delivered by several Product Teams working on different domains: Customer, Loyalty, Search & Browse, Data Integration, and Cart.

    Current key priorities:

    • New brands onboarding
    • Re-architecture
    • Database migrations
    • Migration of microservices to a unified cloud-native solution without business disruption

    Responsibilities

    • Design data solutions for a large retail company.
    • Support the processing of big data volumes.
    • Integrate solutions into the current architecture.

    Mandatory Skills

    • Microsoft Azure Data Factory / SSIS
    • Microsoft Azure Databricks
    • Microsoft Azure Synapse Analytics
    • PostgreSQL
    • PySpark

    Mandatory Skills Description

    • 3+ years of hands-on expertise with Azure Data Factory and Azure Synapse.
    • Strong expertise in designing and implementing data models (conceptual, logical, physical).
    • In-depth knowledge of Azure services (Data Lake Storage, Synapse Analytics, Data Factory, Databricks) and PySpark for scalable data solutions.
    • Proven experience in building ETL/ELT pipelines to load data into data lakes/warehouses.
    • Experience integrating data from disparate sources (databases, APIs, external providers).
    • Proficiency in data warehousing solutions (dimensional modeling, star schemas, Data Mesh, Data/Delta Lakehouse, Data Vault).
    • Strong SQL skills: complex queries, transformations, performance tuning.
    • Experience with metadata and governance in cloud data platforms.
    • Certification in Azure/Databricks (advantage).
    • Experience with cloud-based analytical databases.
    • Hands-on with Azure MI, PostgreSQL on Azure, Cosmos DB, Azure Analysis Services, Informix.
    • Experience in Python and Python-based ETL tools.
    • Knowledge of Bash/Unix/Windows shell scripting (preferable).

    Nice-to-Have Skills

    • Experience with Elasticsearch.
    • Familiarity with Docker/Kubernetes.
    • Skills in troubleshooting and performance tuning for data pipelines.
    • Strong collaboration and communication skills.

    Languages

    • English: B2 (Upper Intermediate)
  • · 17 views · 0 applications · 25d

    Senior Data Platform Architect

    Full Remote · Ukraine · 5 years of experience · English - None

    We are seeking an expert with deep proficiency as a Platform Engineer, possessing experience in data engineering. This individual should have a comprehensive understanding of both data platforms and software engineering, enabling them to integrate the platform effectively within an IT ecosystem.

    Responsibilities:

    • Manage and optimize data platforms (Databricks, Palantir).
    • Ensure high availability, security, and performance of data systems.
    • Provide valuable insights about data platform usage.
    • Optimize computing and storage for large-scale data processing.
    • Design and maintain system libraries (Python) used in ETL pipelines and platform governance.
    • Optimize ETL Processes – Enhance and tune existing ETL processes for better performance, scalability, and reliability.

    Mandatory Skills Description:

    • Minimum 10 Years of experience in IT/Data.
    • Minimum 5 years of experience as a Data Platform Engineer/Data Engineer.
    • Bachelor's in IT or related field.
    • Infrastructure & Cloud: Azure, AWS (expertise in storage, networking, compute).
    • Data Platform Tools: Any of Palantir, Databricks, Snowflake.
    • Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
    • SQL: Expertise in writing and optimizing SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
    • Data Warehousing: Experience working with data warehousing concepts and platforms, ideally Databricks.
    • ETL Tools: Familiarity with ETL tools & processes
    • Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
    • Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
    • Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
    • Data Quality Tools: Experience implementing data validation, cleaning, and quality frameworks, ideally Monte Carlo.

    Nice-to-Have Skills Description:

    • Containerization & Orchestration: Docker, Kubernetes.
    • Infrastructure as Code (IaC): Terraform.
    • Understanding of Investment Data domain (desired).

    Languages:

    English: B2 Upper Intermediate

  • · 49 views · 5 applications · 26d

    Database Engineer

    Full Remote · Ukraine, Poland, Hungary · Product · 5 years of experience · English - None

    We’re hiring a Database Engineer to design, build, and operate reliable data platforms and pipelines. You’ll focus on robust ETL/ELT workflows, scalable big data processing, and cloud-first architectures (Azure preferred) that power analytics and applications.

     

    What You’ll Do

     

    • Design, build, and maintain ETL/ELT pipelines and data workflows (e.g., Azure Data Factory, Databricks, Spark, ClickHouse, Airflow, etc.).
    • Develop and optimize data models, data warehouse/lake/lakehouse schema (partitioning, indexing, clustering, cost/performance tuning, etc.).
    • Build scalable batch and streaming processing jobs (Spark/Databricks, Delta Lake; Kafka/Event Hubs a plus); a brief sketch follows this list.
    • Ensure data quality, reliability, and observability (tests, monitoring, alerting, SLAs).
    • Implement CI/CD and version control for data assets and pipelines.
    • Secure data and environments (IAM/Entra ID, Key Vault, strong tenancy guarantees, encryption, least privilege).
    • Collaborate with application, analytics, and platform teams to deliver trustworthy, consumable datasets.
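
    To illustrate the streaming item above, here is a minimal, hedged sketch of a Spark Structured Streaming job that reads from a Kafka-compatible endpoint (Event Hubs exposes one) and appends to a Delta table. The broker address, topic, and storage paths are placeholders, not project details.

```python
# Hypothetical sketch: broker, topic, and paths are placeholders for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Read a stream of events from a Kafka-compatible endpoint.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load()
    .select(
        F.col("key").cast("string"),
        F.col("value").cast("string"),
        F.col("timestamp"),
    )
)

# Append raw events to a Delta table; the checkpoint enables recovery on restart.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")  # placeholder path
    .outputMode("append")
    .start("/mnt/delta/bronze/orders")                        # placeholder path
)
query.awaitTermination()
```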

     

    Required Qualifications

     

    • ETL or ELT experience required (ADF/Databricks/dbt/Airflow or similar).
    • Big data experience required.
    • Cloud experience required; Azure preferred (Synapse, Data Factory, Databricks, Azure Storage, Event Hubs, etc.).
    • Strong SQL and performance tuning expertise; hands-on with at least one warehouse/lakehouse (Synapse/Snowflake/BigQuery/Redshift or similar).
    • Solid data modeling fundamentals (star/snowflake schemas, normalization/denormalization, CDC, etc.).
    • Experience with CI/CD, Git, and infrastructure automation basics.

     

    Nice to Have

     

    • Streaming pipelines (Kafka, Event Hubs, Kinesis, Pub/Sub) and exactly-once/at-least-once patterns.
    • Orchestration and workflow tools (Airflow, Prefect, Azure Data Factory).
    • Python for data engineering.
    • Data governance, lineage, and security best practices.
    • Infrastructure as Code (Terraform) for data platform provisioning.
  • · 62 views · 4 applications · 26d

    Senior Data Engineer/NextFlow Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    Meet the YozmaTech

    YozmaTech isn’t just another tech company – we’re a global team of go-getters, innovators, and A-players helping startups and product companies scale smarter and faster.
    We build dedicated development teams across 10+ countries, creating strong, long-term partnerships based on trust, transparency, and real impact.


    Here, every idea counts. We value people who are proactive, open-minded, and ready to grow. If you’re passionate about building meaningful products and want to join a team that feels like family – you’ll feel right at home with us.

     

    For our client, we’re seeking a skilled Data Engineer / Software Developer with deep experience in building, maintaining, and optimizing reproducible data processing pipelines for large-scale scientific data (bioinformatics, genomics, proteomics, or related domains). The successful candidate will bridge computational engineering best practices with biological data challenges, enabling scientists to move from raw data to reliable insights at scale.

     

    You will work with interdisciplinary teams of researchers, computational scientists, and domain experts to design end-to-end workflows, ensure data quality and governance, and implement infrastructure that powers scientific discovery.


    Prior experience with Nextflow or similar workflow systems is strongly preferred.

     

    Key Requirements:
    🔹 Experience with NextFlow and bioinformatics pipelines;
    🔹 Strong programming skills in languages such as Python;
    🔹 Experience with data processing and pipeline development;
    🔹 Familiarity with Linux environments, cloud computing workflows;

     

    Domain Experience:
    🔹 Prior work in scientific data environments or life sciences research (genomics/proteomics/high-throughput data) is highly desirable;

     

    Soft Skills:
    🔹 Strong problem-solving, communication, and organization skills; ability to manage multiple projects and deliverables;
    🔹 Comfortable collaborating with researchers from biological, computational, and engineering disciplines;
    🔹 English – Upper-Intermediate or higher.

     

    Will be a plus:
    🔹 Experience with cloud-based infrastructure and containerization (e.g., Docker);
    🔹 Familiarity with AI and machine learning concepts;
    🔹 Experience with agile development methodologies and version control systems (e.g., Git);

     

    What you will do:
    🔹 Design, develop, and maintain high-performance, portable data pipelines using NextFlow;
    🔹 Collaborate with data scientists and researchers to integrate new algorithms and features into the pipeline;
    🔹 Ensure the pipeline is scalable, efficient, and well-organized;
    🔹 Develop and maintain tests to ensure the pipeline is working correctly;
    🔹 Work with the DevOps team to deploy and manage the pipeline on our infrastructure;
    🔹 Participate in design meetings and contribute to the development of new features and algorithms;

     

    Interview stages:
    🔹 HR Interview;
    🔹 Technical Interview;
    🔹 Reference Check;
    🔹 Offer;

     

    Why Join Us?

    At YozmaTech, we’re self-starters who grow together. Every day, we tackle real challenges for real products – and have fun doing it. We work globally, think entrepreneurially, and support each other like family. We invest in your growth and care about your voice. With us, you’ll always know what you’re working on and why it matters.
    From day one, you’ll get:
    🔹 Direct access to clients and meaningful products;
    🔹 Flexibility to work remotely or from our offices;
    🔹 A-team colleagues and a zero-bureaucracy culture;
    🔹 Opportunities to grow, lead, and make your mark.

     

    After you apply

    We’ll keep it respectful, clear, and personal from start to offer.
    You’ll always know what project you’re joining – and how you can grow with us.

    Everyone’s welcome

    Diversity makes us better. We create a space where you can thrive as you are.

    Ready to build something meaningful?

    Let’s talk. Your next big adventure might just start here.

  • · 78 views · 14 applications · 29d

    Senior Data Engineer to $6000

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B2

    Job Description

    • Solid experience with the Azure data ecosystem: Factory, Databricks or Fabric, ADLS Gen2, Azure SQL, Blob Storage, Key Vault, and Functions
    • Proficiency in Python and SQL for building ingestion, transformation, and processing workflows
    • Clear understanding of Lakehouse architecture principles, Delta Lake patterns, and modern data warehousing
    • Practical experience building config-driven ETL/ELT pipelines, including API integrations and Change Data Capture (CDC)
    • Working knowledge of relational databases (MS SQL, PostgreSQL) and exposure to NoSQL concepts
    • Ability to design data models and schemas optimized for analytics and reporting workloads
    • Comfortable working with common data formats: JSON, Parquet, CSV
    • Experience with CI/CD automation for data workflows (GitHub Actions, Azure DevOps, or similar)
    • Familiarity with data governance practices: lineage tracking, access control, encryption
    • Strong problem-solving mindset with attention to detail
    • Clear written and verbal communication for async collaboration

       

    Nice-to-Have

    • Proficiency with Apache Spark using PySpark for large-scale data processing
    • Experience with Azure Service Bus/Event Hub for event-driven architectures
    • Familiarity with machine learning and AI integration within data platform context (RAG, vector search, Azure AI Search)
    • Data quality frameworks (Great Expectations, dbt tests)
    • Experience with Power BI semantic models and Row-Level Security

       

    Job Responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Azure Data Factory, Synapse, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for batch and API-based data ingestion
    • Build Medallion architecture layers (Bronze → Silver → Gold), ensuring efficient, reliable, and performant data processing; a brief sketch follows this list
    • Ensure data governance, lineage, and compliance using Azure Key Vault and proper access controls
    • Collaborate with developers and business analysts to deliver trusted datasets for reporting, analytics, and AI/ML use cases
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Implement cross-system identity resolution (global IDs, customer/property keys across multiple platforms)
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
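
    As a hedged sketch of what a config-driven Bronze → Silver promotion step might look like in this kind of Lakehouse, the example below deduplicates each entity by a business key and watermark column. The config entries, keys, and mount paths are assumptions, not project specifics.

```python
# Hypothetical sketch: config entries, keys, and paths are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("medallion-promote").getOrCreate()

# In practice this config could live in a JSON/YAML file or a metadata table.
PIPELINE_CONFIG = [
    {"entity": "customers", "key": "customer_id", "watermark": "updated_at"},
    {"entity": "properties", "key": "property_id", "watermark": "updated_at"},
]

for cfg in PIPELINE_CONFIG:
    bronze = spark.read.format("delta").load(f"/mnt/bronze/{cfg['entity']}")

    # Keep only the latest record per business key, ordered by the watermark column.
    w = Window.partitionBy(cfg["key"]).orderBy(F.col(cfg["watermark"]).desc())
    silver = (
        bronze.withColumn("_rn", F.row_number().over(w))
        .filter(F.col("_rn") == 1)
        .drop("_rn")
    )

    silver.write.format("delta").mode("overwrite").save(f"/mnt/silver/{cfg['entity']}")
```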

       

    Why TeamCraft?

    • Greenfield project - build architecture from scratch, no legacy debt
    • Direct impact - your pipelines power real AI products and business decisions
    • Small team, big ownership - no bureaucracy, fast iteration, your voice matters
    • Stable foundation - US-based company, 300+ employees
    • Growth trajectory - scaling with technology as the driver

     

    About the Project

    TeamCraft is a large U.S. commercial roofing company undergoing an ambitious AI transformation. We’re building a centralized data platform from scratch - a unified Azure Lakehouse that integrates multiple operational systems into a single source of truth (Bronze -> Silver -> Gold).

    This is greenfield development with real business outcomes - not legacy maintenance.

  • · 33 views · 1 application · 29d

    Senior Data Engineer

    Full Remote · Ukraine · 3 years of experience · English - C1

    We are looking for a Senior Data Engineer for staff augmentation.

    Must-have:

    Snowflake / SQL 

    AWS stack (S3, Glue, Lambda, CloudWatch, IAM)

    Python

    Terraform (IaC)

    English: C1+ 

    Nice to have:

    Experience working with REST APIs

    Airflow for orchestration

    CircleCI

    JavaScript

    Apache Kafka or Hadoop

  • · 197 views · 17 applications · 29d

    Data Engineer

    Full Remote · Ukraine · 1 year of experience · English - B2

    N-iX is a global software development service company that helps businesses across the globe create next-generation software products. Founded in 2002, we unite 2,400+ tech-savvy professionals across 40+ countries, working on impactful projects for industry leaders and Fortune 500 companies. Our expertise spans cloud, data, AI/ML, embedded software, IoT, and more, driving digital transformation across finance, manufacturing, telecom, healthcare, and other industries. Join N-iX and become part of a team where your ideas make a real impact.

     

    This role is ideal for someone at the beginning of their data engineering career who wants to grow in a supportive environment. We value curiosity, a learning mindset, and the ability to ask good questions. If you’re motivated to develop your skills and become a strong Data Engineer over time, we’d be happy to help you grow with us 🚀



    Responsibilities

    • Support the implementation of business logic in the Data Warehouse under the guidance of senior engineers
    • Assist in translating business requirements into basic data models and transformations
    • Help develop, maintain, and monitor ETL pipelines using Azure Data Factory
    • Participate in data loading, validation, and basic query performance optimization
    • Work closely with senior team members and customer stakeholders to understand requirements and data flows
    • Contribute to documentation and follow best practices in data engineering and development
    • Gradually propose improvements and ideas as experience grows

       

    Requirements

    • Up to 1.5 years of experience in Data Engineering
    • Basic hands-on experience with SQL and strong willingness to work with it as a core skill
    • Familiarity with Microsoft Azure or strong motivation to learn Azure-based data solutions
    • Understanding of relational databases and fundamentals of data modeling
    • Ability to write clear and maintainable SQL queries
    • Basic experience with version control systems (e.g. Git)
    • Interest in data warehousing and analytical systems
    • Familiarity with Agile ways of working (through coursework, internships, or first commercial experience)
    • Strong analytical thinking and eagerness to learn from more experienced colleagues

       

    Nice to Have

    • Exposure to Azure Data Factory, dbt, or similar ETL tools
    • Basic knowledge of Databricks
    • Understanding of Supply Chain & Logistics concepts
    • Any experience working with SAP data (MM or related modules)
       
  • · 88 views · 16 applications · 30d

    Lead/Architect Data Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    We are seeking a highly skilled Lead Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

     

    Does this relate to you?

    • 7+ years of experience in the Data Engineering field
    • At least 1 year of experience as a Lead/Architect
    • Strong expertise in SQL and data modeling concepts.
    • Hands-on experience with Airflow.
    • Experience working with Redshift.
    • Proficiency in Python for data processing.
    • Strong understanding of data governance, security, and compliance.
    • Experience in implementing CI/CD pipelines for data workflows.
    • Ability to work independently and collaboratively in an agile environment.
    • Excellent problem-solving and analytical skills.

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions.
    • Build and optimize ETL/ELT pipelines for efficient data integration (a brief orchestration sketch follows this list).
    • Design and implement data models to support analytical and reporting needs.
    • Ensure data integrity, quality, and security across all pipelines.
    • Optimize data performance and scalability using best practices.
    • Work with big data technologies such as Redshift.
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
    • Implement CI/CD pipelines for data workflows.
    • Monitor, troubleshoot, and improve data processes and system performance.
    • Stay updated with industry trends and emerging technologies in data engineering.
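
    To give the pipeline and orchestration items above a concrete shape, here is a small, hedged Airflow sketch of a daily extract → load → quality-check DAG. The DAG id, task bodies, and schedule are assumptions for illustration only, not details from this posting.

```python
# Hypothetical sketch: DAG id, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract from source systems (placeholder)")


def load_to_warehouse():
    print("load staged data into the warehouse, e.g. Redshift (placeholder)")


def run_quality_checks():
    print("validate row counts and nulls before exposing data (placeholder)")


with DAG(
    dag_id="dwh_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    t_check = PythonOperator(task_id="quality_checks", python_callable=run_quality_checks)

    t_extract >> t_load >> t_check
```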

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance reviews, including remuneration
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program with attractive bonuses
    • External & internal training and IT certifications

    Ready to try your hand? Don't hesitate to send your CV!

  • · 33 views · 8 applications · 30d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4.5 years of experience · English - B2

    This is a short-term engagement (around 2 months) with potential to extend.
    Role Overview

    We are looking for a hands-on Data Platform Engineer to complete and harden our data ingestion and transformation pipelines. This role is execution-heavy: building reliable ETL, enforcing data quality, wiring orchestration, and making the platform observable, testable, and documented.

    You will work with production databases, APIs, Airflow, dlt, dbt, and a cloud data warehouse. The goal is to deliver data that is correct, incremental, tested, and explainable.

    Responsibilities:
    1. Key Deliverables.
    2. Transformation & Analytics (dbt).
    3. Data Quality & Testing.
    4. Documentation & Enablement.

    Required Skills & Experience:
    1. Strong experience building production ETL/ELT pipelines.

    2. Hands-on experience with dlt (or similar modern ingestion tools).

    3. Solid dbt experience (models, tests, docs).

    4. Experience with Airflow or similar workflow orchestrators.

    5. Strong SQL skills and understanding of data modeling.

    6. Experience working with large, incremental datasets (a brief sketch follows this list).

    7. Good knowledge of Python.
    8. High English level  - B2+.
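
    As a hedged illustration of the incremental-dataset requirement above, the sketch below shows a generic high-watermark extraction pattern in plain Python (SQLAlchemy is used here only for the example; the posting mentions dlt for the actual ingestion). The table, columns, state file, and connection string are assumptions.

```python
# Hypothetical sketch: table, columns, state file, and connection string are
# placeholders; the real project ingests via dlt rather than this helper.
import json
from pathlib import Path

import sqlalchemy as sa

STATE_FILE = Path("state/orders_watermark.json")
ENGINE = sa.create_engine("postgresql+psycopg2://user:pass@host/db")  # placeholder


def read_watermark() -> str:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())["last_updated_at"]
    return "1970-01-01T00:00:00"


def extract_increment(last_updated_at: str):
    # Pull only rows changed since the last successful run.
    query = sa.text("SELECT * FROM orders WHERE updated_at > :wm ORDER BY updated_at")
    with ENGINE.connect() as conn:
        return list(conn.execute(query, {"wm": last_updated_at}).mappings())


def save_watermark(rows) -> None:
    if rows:
        STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
        STATE_FILE.write_text(json.dumps({"last_updated_at": str(rows[-1]["updated_at"])}))


if __name__ == "__main__":
    batch = extract_increment(read_watermark())
    # The batch would normally be handed to the ingestion/transformation layer;
    # here we only persist the new watermark so the next run stays incremental.
    save_watermark(batch)
```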

    Nice to Have

    1. Experience with fintech or high-volume transactional data.

    2. Familiarity with CI-based data testing.

    3. Experience publishing internal data catalogs or documentation portals.

    Interview stages:
    1. Interview with a recruiter.
    2. Technical interview.
    3. Reference check.
    4. Offer.

    What We Offer:
    Full-time role with flexible hours after probation.
    Ongoing training and educational opportunities.
    Performance reviews every 6 months.
    Competitive salary in USD.
    21 paid vacation days.
    7 paid sick days (+15 for serious cases like COVID or surgery).
    10 floating public holidays.
    Online team-building events & fun corporate activities.
    Projects across diverse domains (e-commerce, healthcare, fintech, etc.).
    Clients from the USA, Canada, and Europe.
     
