Data Engineer Jobs (143)

  • 73 views · 10 applications · 19d

    Data Engineering Lead

    Full Remote · Worldwide · Product · 5 years of experience · English - None

    About Traffic Label
    Traffic Label is a performance marketing and technology company with nearly two decades of experience driving engagement and conversion across the iGaming and digital entertainment sectors.
    We’re now building a Customer Data Platform (CDP) on Snowflake and AWS - unifying player data across multiple brands to power automation, insights, and personalization.
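
    To make the stack concrete, here is a minimal sketch of the kind of Snowflake load such a CDP pipeline might run, via the snowflake-connector-python client. The account, stage, and table names are hypothetical, not Traffic Label's actual setup.

    ```python
    # Hypothetical sketch: load raw player events from an S3 external stage
    # into Snowflake with COPY INTO. All identifiers are illustrative.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",      # assumed account identifier
        user="etl_user",
        password="...",            # use a secrets manager in practice
        warehouse="LOAD_WH",
        database="CDP",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # COPY INTO is idempotent per file: already-loaded files are skipped,
        # which keeps re-runs cheap and helps with cost control.
        cur.execute("""
            COPY INTO RAW.PLAYER_EVENTS
            FROM @RAW.S3_EVENTS_STAGE/events/
            FILE_FORMAT = (TYPE = JSON)
            ON_ERROR = 'SKIP_FILE'
        """)
        print(cur.fetchall())      # per-file load results
    finally:
        conn.close()
    ```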

    The Role
    We’re looking for a Data Engineering Lead to own the technical delivery and development of this platform. You’ll architect scalable pipelines, lead a small team, and ensure data reliability, accuracy, and performance.
    Team size: 3–4 engineers/analysts

    Key Responsibilities

    • Design and implement scalable data pipelines processing millions of events daily
    • Own Snowflake data warehouse architecture, optimization, and cost control
    • Lead the engineering team through delivery and performance improvements
    • Ensure >95% data accuracy and 99.9% pipeline uptime
    • Collaborate with marketing, analytics, and compliance teams to align data with business goals

    Requirements

    • 5+ years in data engineering, 2+ in leadership roles
    • Expert in Snowflake, SQL, and Python
    • Proficient with AWS (S3, Lambda, IAM) and orchestration tools (Airflow, dbt, etc.)
    • Strong understanding of data governance, cost optimization, and performance tuning
    • Experience with iGaming data, Kafka/Kinesis, or MLflow is a plus

    Why Join Us

    • Build a core data platform from the ground up
    • Competitive salary and performance bonuses
    • Flexible remote or hybrid work across Europe
    • Supportive, innovative, data-driven culture

    Ready to lead a data platform that powers smarter decisions across global iGaming brands?
    Apply now to join Traffic Label’s Data & Technology team.

  • 120 views · 23 applications · 19d

    Data Engineer

    Full Remote · Worldwide · 3 years of experience · English - B2

    Digis is looking for a Data Engineer to join a long-term partnership with a global leader in the hospitality SaaS industry.
     

    About the Project

    The client is a US-based company providing a SaaS platform for upselling training and analytics. The project is in an active development stage and focuses on processing massive volumes of data from global hotel brands to deliver measurable business value and revenue growth.
     

    Responsibilities

    • Data Processing: Extract and process large-scale data.
    • ETL Management: Run, debug, and optimize ETL jobs on AWS Glue or EMR, monitoring via Spark UI (see the sketch after this list).
    • Backend Integration: Split existing APIs into multiple microservices and scale case detection processes (scanners).
    • Database Optimization: Write and optimize complex SQL queries for PostgreSQL/Redshift and use DynamoDB as a data source/sink.
    • Environment Support: Emulate cloud environments (Pub/Sub, Cloud Run) locally for testing and implement AWS ECS Fargate workflows.
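
    As an illustration of the ETL work described above, a minimal PySpark job of the kind that would run on AWS Glue or EMR and show up in the Spark UI. Paths, fields, and bucket names are invented for the example.

    ```python
    # Minimal PySpark batch job: read raw JSON from S3, clean it, and write
    # partitioned Parquet. On Glue/EMR this would be submitted as a job.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("bookings-etl").getOrCreate()

    raw = spark.read.json("s3://example-bucket/raw/bookings/")  # hypothetical path
    clean = (
        raw.dropDuplicates(["booking_id"])
           .withColumn("checkin_date", F.to_date("checkin_date"))
           .filter(F.col("total_amount") > 0)
    )
    # Partitioned Parquet keeps downstream Redshift/Athena scans cheap.
    (clean.write.mode("overwrite")
          .partitionBy("checkin_date")
          .parquet("s3://example-bucket/curated/bookings/"))
    ```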
       

    Requirements

    • 3+ years of experience as a Data Engineer.
    • 1+ year of hands-on production experience with Spark / PySpark.
    • 1+ year of experience with AWS (S3, Glue, EMR, DynamoDB, ECS) in recent projects.
    • English: Upper-Intermediate, capable of participating in frequent technical meetings and direct collaboration with the US-based CTO.
       

    Why Join

    • Global Impact: Work on a platform used in 124 countries that manages billions in revenue for leading global hotel brands.
    • End-to-End Ownership: Take full responsibility for the data pipeline, from raw data ingestion to production-ready models.
    • Stability: A long-term project with a clear roadmap and a history of successful cooperation.
       

    We Offer

    • 20 paid vacation days per year
    • 5 paid sick leaves per year (no medical documents required)
    • Personalized development plan with training compensation
    • English courses compensation
    • Work equipment if needed (PC / laptop / monitor)
    • Flat and transparent internal communication
    • Opportunity to switch between projects and technologies within Digis
    • Full accounting and legal support
  • 55 views · 0 applications · 19d

    Data Engineer (Relocate)

    Office Work · Spain · Product · 3 years of experience · English - B1 Ukrainian Product 🇺🇦

    We are the creators of a new fintech era!
    Our mission is to change this world by making blockchain accessible to everyone in everyday life. WhiteBIT is a global team of over 1,200 professionals united by one mission — to shape the new world order in the Web3 era. Each of our employees is fully engaged in this transformative journey.
    We work on our blockchain platform, providing maximum transparency and security for more than 8 million users worldwide. Our breakthrough solutions, incredible speed of adaptation to market challenges, and technological superiority are the strengths that take us beyond ordinary companies. Our official partners include the National Football Team of Ukraine, FC Barcelona, Lifecell, FACEIT and VISA.

    The future of Web3 starts with you: join us as a Data Engineer!

     

    Requirements

    — 3+ years of experience as a Data Engineer in high-load or data-driven environments
    — Proficient in Python for data processing and automation (pandas, pyarrow, sqlalchemy, etc.)
    — Advanced knowledge of SQL: query optimization, indexes, partitions, materialized views
    — Hands-on experience with ETL/ELT orchestration tools (e.g., Airflow, Prefect)
    — Experience with streaming technologies (e.g., Kafka, Flink, Spark Streaming)
    — Solid background in data warehouse solutions: ClickHouse, BigQuery, Redshift, or Snowflake
    — Familiarity with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code principles
    — Experience with containerization and deployment tools (e.g., Docker, Kubernetes, CI/CD)
    — Understanding of data modeling, data versioning, and schema evolution (e.g., dbt, Avro, Parquet)
    — English — at least intermediate (for documentation & communication with tech teams)

     

    Responsibilities

    — Design, build, and maintain scalable and resilient data pipelines (batch and real-time)
    — Develop and support data lake/data warehouse architectures
    — Integrate internal and external data sources/APIs into unified data systems
    — Ensure data quality, observability, and monitoring of pipelines
    — Collaborate with backend and DevOps engineers on infrastructure and deployment
    — Optimize query performance and data processing latency across systems
    — Maintain documentation and contribute to internal data engineering standards
    — Implement data access layers and provide well-structured data for downstream teams
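
    For flavor, a minimal sketch bridging two technologies named above, Kafka and ClickHouse, as a micro-batching consumer. Topic, table, and hosts are placeholders; a production pipeline would add offset management, retries, and monitoring.

    ```python
    # Read JSON events from Kafka and insert them into ClickHouse in batches
    # (ClickHouse strongly favors large inserts over row-by-row writes).
    import json
    from kafka import KafkaConsumer          # pip install kafka-python
    from clickhouse_driver import Client     # pip install clickhouse-driver

    consumer = KafkaConsumer(
        "trades",                            # hypothetical topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )
    ch = Client(host="localhost")

    batch = []
    for msg in consumer:
        batch.append((msg.value["symbol"], msg.value["price"], msg.value["ts"]))
        if len(batch) >= 1000:
            ch.execute("INSERT INTO trades (symbol, price, ts) VALUES", batch)
            batch.clear()
    ```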

     

    Work conditions

    Immerse yourself in Crypto & Web3:
    — Master cutting-edge technologies and become an expert in the most innovative industry.
    Work with the Fintech of the Future:
    — Develop your skills in digital finance and shape the global market.
    Take Your Professionalism to the Next Level:
    — Gain unique experience and be part of global transformations.
    Drive Innovations:
    — Influence the industry and contribute to groundbreaking solutions.
    Join a Strong Team:
    — Collaborate with top experts worldwide and grow alongside the best.
    Work-Life Balance & Well-being:
    — Modern equipment, comfortable working conditions, and an inspiring environment to help you thrive.
    — 22 business days of paid leave.
    — Additional days off for national holidays.

  • 62 views · 7 applications · 19d

    Senior Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 8 years of experience · English - B2

    Position Summary:

    We are looking for a talented Senior Data Platform Engineer to join our Blockchain team and participate in the development of the data collection and processing framework that integrates new chains. This is a remote role, and we are happy to consider applications from anywhere in Europe.

    More details: crystalblockchain.com

    Duties and responsibilities:

    • Integration of blockchains, Automated Market Maker (AMM) protocols, and bridges within Crystal's platform;
    • Active participation in development and maintenance of our data pipelines and backend services;
    • Integrate new technologies into our processes and tools;
    • End-to-end feature design and implementation;
    • Code, debug, test and deliver features and improvements in a continuous manner;
    • Provide code review, assistance and feedback for other team members.


    Required:

    • 8+ years of experience developing Python backend services and APIs;
    • Advanced knowledge of SQL - ability to write, understand and debug complex queries;
    • Understanding of data warehousing and basic database architecture principles;
    • POSIX/Unix/Linux ecosystem knowledge;
    • Strong knowledge and experience with Python, and API frameworks such as Flask or FastAPI;
    • Knowledge about blockchain technologies or willingness to learn;
    • Experience with PostgreSQL database system;
    • Knowledge of Unit Testing principles;
    • Experience with Docker containers and proven ability to migrate existing services;
    • Independent and autonomous way of working;
    • Team-oriented work and good communication skills are an asset.
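
    To ground the stack, a minimal FastAPI endpoint backed by PostgreSQL, in the spirit of the backend services listed above. The table, query, and DSN are invented for illustration.

    ```python
    # Toy read endpoint: look up an address balance in PostgreSQL.
    import psycopg2
    from fastapi import FastAPI, HTTPException

    app = FastAPI()

    @app.get("/addresses/{address}/balance")
    def get_balance(address: str):
        conn = psycopg2.connect("dbname=chain user=api")   # hypothetical DSN
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT balance FROM address_balances WHERE address = %s",
                    (address,),
                )
                row = cur.fetchone()
        finally:
            conn.close()
        if row is None:
            raise HTTPException(status_code=404, detail="address not found")
        return {"address": address, "balance": row[0]}
    ```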


    Would be a plus:

    • Practical experience in big data and frameworks – Kafka, Spark, Flink, Data Lakes and Analytical Databases such as ClickHouse;
    • Knowledge of Kubernetes and Infrastructure as Code – Terraform and Ansible;
    • Passion for Bitcoin and Blockchain technologies;
    • Experience with distributed systems;
    • Experience with open-source solutions;
    • Experience with Java or willingness to learn.
  • 65 views · 11 applications · 16d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    On behalf of our client, we are looking for a Senior Data Engineer to strengthen the data team by building and optimizing data pipelines for the IN-Gauge SaaS platform used by global hotel chains.

     

    Responsibilities:

     

    - Build and optimize high-volume ETL/ELT pipelines for hospitality data

    - Support ongoing architectural improvements of the data platform

    - Work with Spark/PySpark in production environments

    - Collaborate with the team to ensure reliable data processing and delivery

     

    Requirements:

     

    - 5+ years of experience as a Data Engineer

    - 2+ years of production experience with Spark / PySpark

    - 2+ years of hands-on experience with AWS (latest project on AWS)

    - Experience building and supporting ETL/ELT pipelines in production

    - Upper-Intermediate or higher level of English

     

    Nice to Have:

     

    - Kafka / Kinesis, DynamoDB, Fargate / Step Functions

    - Airflow or batch orchestration tools

    - Lake Formation / Athena

    - Experience with big data migrations

    - Ability to read Spark UI and troubleshoot OOM issues
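
    On the last point, a small sketch of the mitigations one typically reaches for after reading the Spark UI: broadcasting the small side of a join, repartitioning before a wide aggregation, and writing out instead of collecting to the driver. Dataset paths are illustrative.

    ```python
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("oom-mitigation-demo").getOrCreate()
    orders = spark.read.parquet("s3://example/orders/")   # large fact table
    hotels = spark.read.parquet("s3://example/hotels/")   # small dimension

    # Broadcast the small side: avoids shuffling the large table entirely.
    joined = orders.join(F.broadcast(hotels), "hotel_id")

    # Repartition before a wide aggregation to spread skewed keys across tasks.
    revenue = (joined.repartition(200, "hotel_id")
                     .groupBy("hotel_id")
                     .agg(F.sum("amount").alias("revenue")))

    # Write to storage rather than collect(): collect() pulls all rows to the
    # driver and is a classic OOM source.
    revenue.write.mode("overwrite").parquet("s3://example/revenue/")
    ```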

     

    Company offers:

     

    - Long-term employment with possibilities for professional growth

    - Fully remote work

    - Reasonably flexible schedule

    - 15 days of paid vacation

    - Regular performance reviews

  • 48 views · 5 applications · 18d

    Power Platform Consultant / Automation Specialist

    Full Remote · Ukraine · 3 years of experience · English - B2

    Must-have skills (top 3)
    Power Automate (design and implementation of productive workflows)
    Power Apps (canvas/integration with processes)
    Power BI (basic understanding of reporting and data models)


    Experience
    At least 3 years, ideally 5+ years with Power Automate & Power Apps
    Experience with business process automation (workflows, email automation, approvals, etc.)
    Consulting on the optimal use of Power Automate in the company
    Ideally, initial exposure to AI integrations (e.g., AI Builder, Copilot, external APIs)


    Nice-to-have
    Supply chain context
    SAP as source system
    Snowflake

    Industry: Pharmaceuticals/manufacturing


    Language
    English: very good written and spoken
    German: a very good level is desirable, but not essential

     

    We look forward to your application, CV, and a description of your project experience!

  • 60 views · 6 applications · 17d

    Senior Analytics Engineer (DBT)

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    We are seeking a hands-on Data and BI expert to take ownership of the client's internal data modelling stack. If you are proficient in SQL and DBT, this role may be a great fit for you.

    Hiring stages:
    - HR interview 
    - Technical interview
    - Exam
    - Reference check

    About a project:
    The platform delivers an AI-powered command center for tracking KPIs across teams, turning raw data into actionable insights without complex setup. This Israel-based SaaS platform automates performance analysis, supports HR metrics, and integrates seamlessly with over 50 tools for real-time dashboards and alerts.

    Required Qualifications:
    - 7+ years in a Data Analyst, BI Developer, Analytics Engineer, or similar role
    - Expert-level SQL skills with deep experience in PostgreSQL
    - 4+ years of production DBT development (including model structure, tests, and deployment)
    - High proficiency with dbt (Data Build Tool) is a must
    - Upper-Intermediate English level or higher
    - Solid understanding of Git-based workflows and CI/CD for analytics code
    - Detail-oriented, independent, and confident in communicating technical decisions

    Nice-to-Have:
    - Experience with modern cloud data warehouses (e.g. Snowflake, BigQuery, Redshift)
    - Familiarity with ETL & orchestration tools (e.g. Airbyte, Fivetran)
    - Understanding of data governance, data cataloguing, and metadata management
    - Comfortable working in high-growth startup environments with evolving systems and priorities

    Key Responsibilities
    - Design, build, and maintain modular DBT models powering customer-facing KPIs
    - Define and implement data modelling best practices, including testing, documentation, and deployment
    - Review and optimise complex data pipelines with a focus on performance and clarity
    - Monitor and improve PostgreSQL performance, indexing, and schema structure
    - Debug and troubleshoot issues across the entire data flow—from source connectors to dashboards
    - Collaborate closely with product and engineering to support rapid iteration and insights delivery
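
    As a sketch of the PostgreSQL tuning loop implied above: run EXPLAIN ANALYZE from Python and read the plan before deciding on an index. The DSN, table, and query are hypothetical.

    ```python
    import psycopg2

    conn = psycopg2.connect("dbname=analytics user=dbt")   # assumed DSN
    with conn.cursor() as cur:
        cur.execute("""
            EXPLAIN (ANALYZE, BUFFERS)
            SELECT team_id, avg(score)
            FROM kpi_events
            WHERE created_at >= now() - interval '7 days'
            GROUP BY team_id
        """)
        for (line,) in cur.fetchall():
            print(line)  # a Seq Scan on a large table suggests an index candidate
    conn.close()
    ```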
     

  • 41 views · 4 applications · 4d

    Senior QA Big Data

    Full Remote · Ukraine · 5 years of experience · English - B2

    N-iX is a global software development company founded in 2002, connecting 2,400+ tech professionals across 40+ countries. We deliver innovative technology solutions in cloud computing, data analytics, AI, embedded software, IoT, and more to global industry leaders and Fortune 500 companies. Join us to create technology that drives real change for businesses and people across the world.

    Our client is involved in a large-scale Data Transformation project, with a focus on solidifying the foundation of their data operations. They are aiming to ensure that data is accurate, consistent, and available at critical times to support their business needs. 

     

    Responsibilities:

    • Design and execute test strategies for real-time streaming workflows (Structured Streaming, Azure Event Hub, Topics & Queues).
    • Validate data products transitioning from monolithic to microservices architecture.
    • Perform API testing for data services and orchestration layers (see the sketch after this list).
    • Implement observability checks using Prometheus & Grafana for monitoring pipeline health and performance.
    • Collaborate with DevOps teams to ensure Kubernetes-based deployments meet quality standards.
    • Conduct functional, integration, and performance testing for Azure Functions and orchestration workflows.
    • Ensure compliance with data governance and SLA optimization initiatives.
    • Document test cases, results, and maintain traceability in tools like Jira/Confluence.
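
    For illustration, an API test of the kind described above, written with pytest and requests rather than Postman/REST Assured. The endpoint and response fields are invented.

    ```python
    # Run with: pytest test_orders_api.py
    import requests

    BASE_URL = "https://data-api.example.internal"   # hypothetical service

    def test_orders_endpoint_returns_well_formed_data():
        resp = requests.get(f"{BASE_URL}/v1/orders", params={"limit": 10}, timeout=10)
        assert resp.status_code == 200
        body = resp.json()
        assert isinstance(body["items"], list)
        # Data-quality flavored assertion: every record carries its key.
        assert all("order_id" in item for item in body["items"])
    ```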

       

    Requirements:

    • QA experience in data engineering and real-time data platforms.
    • Strong knowledge of streaming technologies (Structured Streaming, Kafka, Azure Event Hub).
    • Hands-on experience with API testing (Postman, REST Assured).
    • Proficiency in SQL and basic scripting (Python/PySpark preferred).
    • Experience with Azure Cloud services (Azure Functions, ADF, Event Hub).
    • Familiarity with Kubernetes deployments and observability tools (Prometheus, Grafana).
    • Knowledge of orchestration frameworks (preferably Airflow) and CI/CD pipelines.

       

    Nice-to-Have Skills:

    • Exposure to Iceberg and ClickHouse.
    • Understanding of Kappa Architecture and Data Mesh principles.
    • Experience with data product testing in microservices environments.
    • Familiarity with BI tools (Power BI, Qlik).

       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 32 views · 1 application · 17d

    Senior Data Engineer

    Full Remote · Poland · 7 years of experience · English - B2

    Job Description

    • 7+ years of development/design experience in total, with a minimum of 5 years in Big Data technologies, on-prem or in the cloud.
    • Experience with architecting, building, implementing, and managing Big Data platforms On Cloud, covering ingestion (Batch and Real-time), processing (Batch and Real-time), Polyglot Storage, Data Analytics, and Data Access
    • Good understanding of Data Governance, Data Security, Data Compliance, Data Quality, Meta Data Management, Master Data Management, Data Catalog
    • Proven understanding and demonstrable implementation experience of big data platform technologies on cloud (AWS and Azure), including surrounding services like IAM, SSO, Cluster monitoring, Log Analytics, etc.
    • Experience working with Enterprise Data Warehouse technologies, Multi-Dimensional Data Modeling, Data Architectures or other work related to the construction of enterprise data assets
    • Strong Experience implementing ETL/ELT processes and building data pipelines including workflow management, job scheduling and monitoring
    • Experience building stream-processing systems using solutions such as Apache Spark, Databricks, Kafka, etc. (see the sketch after this list)
    • Experience with Spark/Databricks technology is a must
    • Experience with Big Data querying tools
    • Solid skills in Python
    • Strong experience with data modelling and schema design
    • Strong SQL programming background
    • Excellent interpersonal and teamwork skills
    • Experience driving solution/enterprise-level architecture and collaborating with other tech leads
    • Strong problem solving, troubleshooting and analysis skills
    • Experience working in a geographically distributed team
    • Experience leading and mentoring other team members
    • Good knowledge of Agile Scrum
    • Good communication skills
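
    To make the stream-processing requirement concrete, a minimal Spark Structured Streaming sketch reading from Kafka. Broker, topic, schema, and paths are placeholders.

    ```python
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StringType, DoubleType

    spark = SparkSession.builder.appName("stream-demo").getOrCreate()

    schema = (StructType()
              .add("device_id", StringType())
              .add("reading", DoubleType()))

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
              .option("subscribe", "telemetry")
              .load()
              .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Checkpointing gives exactly-once file output across restarts.
    query = (events.writeStream.format("parquet")
             .option("path", "s3://example/telemetry/")
             .option("checkpointLocation", "s3://example/checkpoints/telemetry/")
             .start())
    query.awaitTermination()
    ```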

     

    Job Responsibilities

    • Work directly with the Client teams to understand the requirements/needs and rapidly prototype data and analytics solutions based upon business requirements
    • Design, implement, and manage large-scale data platform/applications, including ingestion, processing, storage, data access, data governance capabilities and related infrastructure
    • Support design and development of solutions for the deployment of data analytics notebooks, tools, dashboards and reports to various stakeholders
    • Communicate with Product/DevOps/Development/QA teams
    • Architect data pipelines and ETL/ELT processes to connect with various data sources
    • Design and maintain enterprise data warehouse models
    • Take part in the performance optimization processes
    • Guide on research activities (PoC) if necessary
    • Manage a cloud-based data & analytics platform
    • Establish CI/CD best practices within the Big Data scope
  • 65 views · 6 applications · 16d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    Our client is a legal tech startup that focuses on AI and machine learning, specifically building chatbots to answer legal questions for lawyers. They are looking for a Senior Data Engineer for a high-impact project: digitizing law in Morocco and Africa and creating the first AI-queryable legal knowledge base.

    Their ambition is to build a platform capable of answering legal questions in a reliable, well-sourced, and traceable way, based on a massive corpus of heterogeneous legal documents.

     

    🚀 Why this project is different
    You will join a true “knowledge infrastructure” mission:

    • Contribute to making the law more accessible
    • Build a durable asset: a structured database of Moroccan law (in French), extensible to Africa
    • Work on a concrete and deep technical challenge: transforming unstructured data into exploitable, reliable, and maintainable data at scale

     

    Required skills:

    • 3+ years of experience in Data Engineering and/or applied Document AI / NLP
    • Strong proficiency in Python
    • Hands-on experience with unstructured documents: PDF parsing, OCR, cleaning, structuring
    • Used to delivering to production: robust pipelines, observability, quality, performance
       

    🛠 Stack/skills (indicative)

    • Storage: AWS
    • Document processing: OCR/parsing tools, text preprocessing pipelines
    • Testing & quality: metrics, sampling, automated validation


    Nice to have

    • Experience with legal / regulatory corpora or high-precision content
    • Familiarity with multilingual issues and encoding
    • Basic knowledge of downstream needs (vector DBs, retrieval, citation)

     

    Scope of work:

    You will be responsible for the “documents → structured data” pipeline that will feed our AI (RAG) engine.

     

    At the core of the role (technical focus)
    Build a structured database of Moroccan law in French from highly heterogeneous data:

    • PDFs (text-based and scanned), Word files, images, text files, sometimes noisy or incomplete
    • Text extraction (parsing + OCR when needed), cleaning
    • Structuring: detection of titles/chapters/sections/articles, hierarchy, normalization
    • Intelligent chunking (based on legal structure rather than arbitrary size), with traceability (source, page, identifiers)
    • Metadata: date, type of text (law/decree/circular/case law, etc.), source, version, article numbers, etc.
    • Deduplication & versioning: redundant documents, amendments, consolidated versions
    • Industrialization: orchestration, logs, retries, idempotence, monitoring, quality tests
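
    A simplified sketch of the structure-aware chunking step described above: split on article headings instead of arbitrary sizes, carrying traceability metadata with each chunk. The regex assumes French "Article N" headings and would need per-corpus tuning.

    ```python
    import re

    ARTICLE_RE = re.compile(r"^(Article\s+\S+)", re.MULTILINE)

    def chunk_by_article(text: str, source: str):
        """Yield {article, text, source} chunks split on legal headings."""
        matches = list(ARTICLE_RE.finditer(text))
        for i, m in enumerate(matches):
            end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
            yield {
                "article": m.group(1),   # e.g. "Article 12"
                "text": text[m.start():end].strip(),
                "source": source,        # traceability back to the document
            }

    doc = "Article 1\nToute personne...\n\nArticle 2\nLa loi..."
    for chunk in chunk_by_article(doc, source="dahir_1913.pdf"):
        print(chunk["article"], len(chunk["text"]))
    ```
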
  • 31 views · 7 applications · 16d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4.5 years of experience · English - B2

    This is a short-term engagement (around 2 months) with potential to extend.
    Role Overview

    We are looking for a hands-on Data Platform Engineer to complete and harden our data ingestion and transformation pipelines. This role is execution-heavy: building reliable ETL, enforcing data quality, wiring orchestration, and making the platform observable, testable, and documented.

    You will work with production databases, APIs, Airflow, dlt, dbt, and a cloud data warehouse. The goal is to deliver data that is correct, incremental, tested, and explainable.
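
    As a sketch of the ingestion half, a minimal dlt pipeline doing incremental, merge-based loading; the API endpoint, cursor field, and destination are placeholders for illustration.

    ```python
    import dlt
    import requests

    @dlt.resource(primary_key="id", write_disposition="merge")
    def transactions(updated_at=dlt.sources.incremental("updated_at")):
        # dlt tracks the cursor between runs, so each run fetches only new rows.
        resp = requests.get(
            "https://api.example.com/transactions",      # hypothetical source
            params={"since": updated_at.last_value},
            timeout=30,
        )
        resp.raise_for_status()
        yield from resp.json()

    pipeline = dlt.pipeline(
        pipeline_name="payments",
        destination="duckdb",       # swap for the real warehouse destination
        dataset_name="raw",
    )
    print(pipeline.run(transactions()))
    ```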

    Responsibilities:
    1. Key Deliverables.
    2. Transformation & Analytics (dbt).
    3. Data Quality & Testing.
    4. Documentation & Enablement.

    Required Skills & Experience:
    1. Strong experience building production ETL/ELT pipelines.

    2. Hands-on experience with dlt (or similar modern ingestion tools).

    3. Solid dbt experience (models, tests, docs).

    4. Experience with Airflow or similar workflow orchestrators.

    5. Strong SQL skills and understanding of data modeling.

    6. Experience working with large, incremental datasets.

    7. Good knowledge of Python.
    8. High level of English - B2+.

    Nice to Have

    1. Experience with fintech or high-volume transactional data.

    2. Familiarity with CI-based data testing.

    3. Experience publishing internal data catalogs or documentation portals.

    Interview stages:
    1. Interview with a recruiter.
    2. Technical interview.
    3. Reference check.
    4. Offer.

    What We Offer:
    Full-time role with flexible hours after probation.
    Ongoing training and educational opportunities.
    Performance reviews every 6 months.
    Competitive salary in USD.
    21 paid vacation days.
    7 paid sick days (+15 for serious cases like COVID or surgery).
    10 floating public holidays.
    Online team-building events & fun corporate activities.
    Projects across diverse domains (e-commerce, healthcare, fintech, etc.).
    Clients from the USA, Canada, and Europe.
     

  • 78 views · 15 applications · 16d

    Lead/Architect Data Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    We are seeking a highly skilled Lead Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

     

    Does this relate to you?

    • 7+ years of experience in the Data Engineering field
    • At least 1 year of experience as a Lead/Architect
    • Strong expertise in SQL and data modeling concepts.
    • Hands-on experience with Airflow.
    • Experience working with Redshift.
    • Proficiency in Python for data processing.
    • Strong understanding of data governance, security, and compliance.
    • Experience in implementing CI/CD pipelines for data workflows.
    • Ability to work independently and collaboratively in an agile environment.
    • Excellent problem-solving and analytical skills.

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions.
    • Build and optimize ETL/ELT pipelines for efficient data integration.
    • Design and implement data models to support analytical and reporting needs.
    • Ensure data integrity, quality, and security across all pipelines.
    • Optimize data performance and scalability using best practices.
    • Work with big data technologies such as Redshift.
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
    • Implement CI/CD pipelines for data workflows.
    • Monitor, troubleshoot, and improve data processes and system performance.
    • Stay updated with industry trends and emerging technologies in data engineering.
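
    For flavor, a minimal Airflow DAG skeleton of the kind such pipelines hang off; the S3 key and Redshift COPY are placeholders, not a working load.

    ```python
    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def warehouse_load():

        @task
        def extract() -> str:
            # In a real pipeline: land source data on S3 and return the key.
            return "s3://example/raw/orders.parquet"     # hypothetical

        @task
        def load(s3_key: str) -> None:
            # Illustration only: here one would issue a Redshift COPY,
            # e.g. through an Airflow Postgres/Redshift hook.
            print(f"COPY staging.orders FROM '{s3_key}' ...")

        load(extract())

    warehouse_load()
    ```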

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance reviews, including remuneration
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program with attractive bonuses
    • External & internal training and IT certifications

    Ready to try your hand? Don't hesitate to send your CV!

  • 81 views · 11 applications · 15d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · English - B2

    We are working on a US-based data-driven product, building a scalable and cost-efficient data platform that transforms raw data into actionable business insights.

    For us, data engineering is not just about moving data — it’s about doing it right: with strong architecture, performance optimization, and automation at the core.

    Role Overview

    We are looking for a highly analytical and technically strong Data Engineer to design, build, optimize, and maintain scalable data pipelines.

    You will be responsible for the architectural integrity of the data platform, ensuring seamless data flow from ingestion to business-ready datasets.

    The ideal candidate is an expert in SQL and Python, who understands that great data engineering means:

    • cost efficiency,
    • smart partitioning and modeling,
    • performance optimization,
    • reliable automation.

    Technical Requirements

     

    Must-Have

    • Expert-Level SQL
      • Complex queries and window functions
      • Query optimization and performance tuning
      • Identifying and fixing bottlenecks
      • Reducing query complexity
    • Python
      • Data manipulation
      • Scripting
      • Building ETL / ELT frameworks
    • AWS Core Infrastructure
      • AWS Kinesis Firehose (near-real-time data streaming)
      • Amazon S3 (data storage)
    • Version Control
      • Git (GitHub / GitLab)
      • Branching strategies
      • Participation in technical code reviews

     

    Nice-to-Have

    • Modern Data Stack
      • dbt for modular SQL modeling and documentation
    • Data Warehousing
      • Google BigQuery
      • Query optimization, slot management, cost-efficient querying
    • Advanced Optimization Techniques
      • Partitioning
      • Clustering
      • Bucketing
    • Salesforce Integration
      • Experience integrating Salesforce data into various destinations
    • Docker / ECS
    • AI / ML exposure (a plus)

     

    Key Responsibilities

    • Pipeline Architecture
      • Design and implement robust data pipelines using AWS Kinesis and Python (see the sketch after this list)
      • Move data from raw sources to the Data Warehouse following best practices
    • Data Modeling
      • Transform raw data into clean, business-ready datasets using dbt
    • Performance Engineering
      • Optimize SQL queries and data structures for high performance and cost efficiency
    • Code Quality
      • Lead and participate in code reviews
      • Ensure high standards for performance, security, and readability
    • Collaboration
      • Work closely with Data Analysts and Product Managers
      • Translate business requirements into scalable data schemas

     

    Working Schedule

    • Monday – Friday
    • 16:00 – 00:00 Kyiv time
    • Full alignment with a US-based team and stakeholders

     

    What We Value

    • Strong ownership of data architecture
    • Ability to think beyond “just making it work”
    • Focus on scalability, performance, and cost
    • Clear communication with technical and non-technical teams
  • 160 views · 8 applications · 15d

    Data Engineer

    Full Remote · Ukraine · 1 year of experience · English - B2

    N-iX is a global software development service company that helps businesses across the globe create next-generation software products. Founded in 2002, we unite 2,400+ tech-savvy professionals across 40+ countries, working on impactful projects for industry leaders and Fortune 500 companies. Our expertise spans cloud, data, AI/ML, embedded software, IoT, and more, driving digital transformation across finance, manufacturing, telecom, healthcare, and other industries. Join N-iX and become part of a team where your ideas make a real impact.

     

    This role is ideal for someone at the beginning of their data engineering career who wants to grow in a supportive environment. We value curiosity, a learning mindset, and the ability to ask good questions. If you’re motivated to develop your skills and become a strong Data Engineer over time, we’d be happy to help you grow with us 🚀



    Responsibilities

    • Support the implementation of business logic in the Data Warehouse under the guidance of senior engineers
    • Assist in translating business requirements into basic data models and transformations
    • Help develop, maintain, and monitor ETL pipelines using Azure Data Factory
    • Participate in data loading, validation, and basic query performance optimization
    • Work closely with senior team members and customer stakeholders to understand requirements and data flows
    • Contribute to documentation and follow best practices in data engineering and development
    • Gradually propose improvements and ideas as experience grows
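
    As a taste of the validation work above, a small Python check against a warehouse table after an Azure Data Factory load; the connection string and table are placeholders.

    ```python
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=example.database.windows.net;DATABASE=dwh;"  # hypothetical
        "UID=etl;PWD=..."
    )
    cur = conn.cursor()

    cur.execute("SELECT COUNT(*) FROM stg.shipments")
    total = cur.fetchone()[0]

    cur.execute("SELECT COUNT(*) FROM stg.shipments WHERE shipment_id IS NULL")
    missing_keys = cur.fetchone()[0]

    print(f"rows loaded: {total}, rows with missing keys: {missing_keys}")
    assert missing_keys == 0, "key column must not contain NULLs"
    conn.close()
    ```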

       

    Requirements

    • Up to 1.5 years of experience in Data Engineering
    • Basic hands-on experience with SQL and strong willingness to work with it as a core skill
    • Familiarity with Microsoft Azure or strong motivation to learn Azure-based data solutions
    • Understanding of relational databases and fundamentals of data modeling
    • Ability to write clear and maintainable SQL queries
    • Basic experience with version control systems (e.g. Git)
    • Interest in data warehousing and analytical systems
    • Familiarity with Agile ways of working (through coursework, internships, or first commercial experience)
    • Strong analytical thinking and eagerness to learn from more experienced colleagues

       

    Nice to Have

    • Exposure to Azure Data Factory, dbt, or similar ETL tools
    • Basic knowledge of Databricks
    • Understanding of Supply Chain & Logistics concepts
    • Any experience working with SAP data (MM or related modules)
       
  • 31 views · 1 application · 15d

    Senior Data Engineer

    Full Remote · Ukraine · 3 years of experience · English - C1

    We are looking for a Senior Data Engineer for staff augmentation.

    Must-have:

    Snowflake / SQL 

    AWS stack (S3, Glue, Lambda, CloudWatch, IAM)

    Python

    Terraform (IaC)

    English: C1+ 

    Nice to have:

    Experience with REST APIs

    Airflow for orchestration

    CircleCI

    JavaScript

    Apache Kafka or Hadoop
