Jobs Data Engineer

  • · 58 views · 4 applications · 19d

    Senior Data Engineer / Nextflow Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    Meet YozmaTech

    YozmaTech isn’t just another tech company – we’re a global team of go-getters, innovators, and A-players helping startups and product companies scale smarter and faster.
    We build dedicated development teams across 10+ countries, creating strong, long-term partnerships based on trust, transparency, and real impact.


    Here, every idea counts. We value people who are proactive, open-minded, and ready to grow. If you’re passionate about building meaningful products and want to join a team that feels like family – you’ll feel right at home with us.

     

    On behalf of our client, we’re seeking a skilled Data Engineer / Software Developer with deep experience in building, maintaining, and optimizing reproducible data processing pipelines for large-scale scientific data (bioinformatics, genomics, proteomics, or related domains). The successful candidate will bridge computational engineering best practices with biological data challenges, enabling scientists to move from raw data to reliable insights at scale.

     

    You will work with interdisciplinary teams of researchers, computational scientists, and domain
    experts to design end-to-end workflows, ensure data quality and governance, and implement
    infrastructure that powers scientific discovery.


    Prior experience with Nextflow or similar workflow systems is strongly preferred.

     

    Key Requirements:
    🔹 Experience with Nextflow and bioinformatics pipelines;
    🔹 Strong programming skills in languages such as Python;
    🔹 Experience with data processing and pipeline development;
    🔹 Familiarity with Linux environments and cloud computing workflows;

     

    Domain Experience:
    🔹 Prior work in scientific data environments or life sciences research (genomics/proteomics/high-throughput data) is highly desirable;

     

    Soft Skills:
    🔹 Strong problem-solving, communication, and organization skills; ability to manage multiple projects and deliverables;
    🔹 Comfortable collaborating with researchers from biological, computational, and engineering disciplines;
    🔹 English – Upper-Intermediate or higher.

     

    Will be a plus:
    🔹 Experience with cloud-based infrastructure and containerization (e.g., Docker);
    🔹 Familiarity with AI and machine learning concepts;
    🔹 Experience with agile development methodologies and version control systems (e.g., Git);

     

    What you will do:
    🔹 Design, develop, and maintain high-performance, portable data pipelines using Nextflow;
    🔹 Collaborate with data scientists and researchers to integrate new algorithms and features into the pipeline;
    🔹 Ensure the pipeline is scalable, efficient, and well-organized;
    🔹 Develop and maintain tests to ensure the pipeline is working correctly (see the sketch after this list);
    🔹 Work with the DevOps team to deploy and manage the pipeline on our infrastructure;
    🔹 Participate in design meetings and contribute to the development of new features and algorithms;
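
    Nextflow processes themselves are written in Nextflow's Groovy-based DSL, so the snippet below is only a language-agnostic illustration of the testing point above: a minimal Python sketch of a pipeline step paired with a pytest-style check. The step name, record fields, and quality threshold are hypothetical.

    # Hypothetical pipeline step: keep only records whose quality score
    # clears a minimum threshold. Field names and threshold are illustrative.
    from typing import Iterable, List

    def filter_by_quality(records: Iterable[dict], min_score: float = 30.0) -> List[dict]:
        """Return only records whose 'quality' field is at or above min_score."""
        return [r for r in records if r.get("quality", 0.0) >= min_score]

    # pytest-style check that the step behaves as expected on a tiny input.
    def test_filter_by_quality_drops_low_quality_records():
        records = [{"id": "a", "quality": 42.0}, {"id": "b", "quality": 10.0}]
        kept = filter_by_quality(records, min_score=30.0)
        assert [r["id"] for r in kept] == ["a"]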

     

    Interview stages:
    🔹 HR Interview;
    🔹 Technical Interview;
    🔹 Reference Check;
    🔹 Offer;

     

    Why Join Us?

    At YozmaTech, we’re self-starters who grow together. Every day, we tackle real challenges for real products – and have fun doing it. We work globally, think entrepreneurially, and support each other like family. We invest in your growth and care about your voice. With us, you’ll always know what you’re working on and why it matters.
    From day one, you’ll get:
    🔹 Direct access to clients and meaningful products;
    🔹 Flexibility to work remotely or from our offices;
    🔹 A-team colleagues and a zero-bureaucracy culture;
    🔹 Opportunities to grow, lead, and make your mark.

     

    After you apply

    We’ll keep it respectful, clear, and personal from start to offer.
    You’ll always know what project you’re joining – and how you can grow with us.

    Everyone’s welcome

    Diversity makes us better. We create a space where you can thrive as you are.

    Ready to build something meaningful?

    Let’s talk. Your next big adventure might just start here.

  • · 35 views · 1 application · 22d

    Azure Data Engineer (ETL Developer)

    Hybrid Remote · Ukraine · Product · 3 years of experience · English - B1

    Kyivstar is looking for an Azure Data Engineer (Developer) to drive different life cycles of large systems. The Data Lifecycle Engineer will have the opportunity to help customers realize their full potential through accelerated adoption and productive use of Microsoft Data and AI technologies.

     

    Requirements:

    — 3+ years of technical expertise in Database development (preferably with SQL, including Azure SQL) – designing and building database solutions (tables / stored procedures / forms / queries / etc.);

    — Business intelligence knowledge with a deep understanding of data structure / data models to design and tune BI solutions;

    — Advanced data analytics – designing and building solutions using technologies such as Databricks, Azure Data Factory, Azure Data Lake, HD Insights, SQL DW, stream analytics, machine learning, R server;

    — Data formats knowledge and the differences between them;

    — Experience with Hadoop stack;

    — Experience with RDBMS and/or NoSQL;

    — Experience with Kafka;

    — Experience with Java and/or Scala and/or Python;

    — Knowledge of version control systems: Git or Bitbucket;

    — BI Tools experience (PowerBI); 

    — Background in test driven development, automated testing and other software engineering best practices (e.g., performance, security, BDD, etc.);

    — Docker/Kubernetes paradigm understanding;

    — English – Strong Intermediate;

    — Microsoft Certified is a plus.

     

    Responsibility:

    — Developing ETL flows based on the Azure Cloud stack: Databricks, Azure Data Factory, Azure Data Lake, HD Insights, SQL DW, stream analytics, machine learning, R server (see the sketch after this list);

    — Troubleshooting and performance optimization for data processing flows, data models;

    — Build and maintain reporting, models, dashboards.
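
    As a rough illustration of the kind of ETL flow described above, here is a minimal PySpark sketch of the sort that could run as a Databricks job; the storage paths, column names, and aggregation are hypothetical assumptions, not the team's actual pipeline.

    # Minimal PySpark ETL sketch: read raw events, clean them, write a daily aggregate.
    # All paths and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_usage_etl").getOrCreate()

    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

    cleaned = (
        raw.filter(F.col("subscriber_id").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
    )

    daily_usage = cleaned.groupBy("subscriber_id", "event_date").agg(
        F.sum("bytes_used").alias("total_bytes")
    )

    daily_usage.write.mode("overwrite").parquet(
        "abfss://curated@examplelake.dfs.core.windows.net/daily_usage/"
    )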

     

     

    We offer:

    — A unique experience of working at Ukraine’s largest and most customer-beloved mobile operator;

    — Real opportunity to ship digital products to millions of customers;

    — To contribute to building the biggest analytical cloud environment in Ukraine;

    — To create Big Data/AI products, changing the whole industry and influencing Ukraine;

    — To be involved in real Big Data projects with petabytes of data and billions of events processed daily in real time;

    — A competitive salary;

    — Great possibilities for professional development and career growth;

    — Medical insurance;

    — Life insurance;

    — Friendly & Collaborative Environment.

  • · 74 views · 14 applications · 22d

    Senior Data Engineer to $6000

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B2

    Job Description

    • Solid experience with the Azure data ecosystem: Factory, Databricks or Fabric, ADLS Gen2, Azure SQL, Blob Storage, Key Vault, and Functions
    • Proficiency in Python and SQL for building ingestion, transformation, and processing workflows
    • Clear understanding of Lakehouse architecture principles, Delta Lake patterns, and modern data warehousing
    • Practical experience building config-driven ETL/ELT pipelines, including API integrations and Change Data Capture (CDC); see the sketch after this list
    • Working knowledge of relational databases (MS SQL, PostgreSQL) and exposure to NoSQL concepts
    • Ability to design data models and schemas optimized for analytics and reporting workloads
    • Comfortable working with common data formats: JSON, Parquet, CSV
    • Experience with CI/CD automation for data workflows (GitHub Actions, Azure DevOps, or similar)
    • Familiarity with data governance practices: lineage tracking, access control, encryption
    • Strong problem-solving mindset with attention to detail
    • Clear written and verbal communication for async collaboration
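
    The config-driven pipeline point above can be pictured with a small sketch: each source is described by a config entry, and one generic function does the API pull and landing. The endpoints, keys, and folders below are hypothetical placeholders, not the project's actual configuration.

    # Sketch of config-driven ingestion: sources are data, the loader is generic.
    # Endpoints, keys, and landing folders are hypothetical.
    import json
    import os
    import urllib.request

    SOURCES = [
        {"name": "orders",    "url": "https://api.example.com/v1/orders",    "landing": "raw/orders"},
        {"name": "customers", "url": "https://api.example.com/v1/customers", "landing": "raw/customers"},
    ]

    def ingest(source: dict) -> str:
        """Pull one source's payload and write it to its landing folder as JSON."""
        with urllib.request.urlopen(source["url"]) as resp:
            payload = json.load(resp)
        os.makedirs(source["landing"], exist_ok=True)
        out_path = os.path.join(source["landing"], f"{source['name']}.json")
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(payload, f)
        return out_path

    if __name__ == "__main__":
        for src in SOURCES:
            print("landed", ingest(src))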

       

    Nice-to-Have

    • Proficiency with Apache Spark using PySpark for large-scale data processing
    • Experience with Azure Service Bus/Event Hub for event-driven architectures
    • Familiarity with machine learning and AI integration within data platform context (RAG, vector search, Azure AI Search)
    • Data quality frameworks (Great Expectations, dbt tests)
    • Experience with Power BI semantic models and Row-Level Security

       

    Job Responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Azure Data Factory, Synapse, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for batch and API-based data ingestion
    • Build Medallion architecture layers (Bronze → Silver → Gold) ensuring efficient, reliable, and performant data processing; see the sketch after this list
    • Ensure data governance, lineage, and compliance using Azure Key Vault and proper access controls
    • Collaborate with developers and business analysts to deliver trusted datasets for reporting, analytics, and AI/ML use cases
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Implement cross-system identity resolution (global IDs, customer/property keys across multiple platforms)
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
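
    To make the Medallion point above concrete, here is a deliberately simplified pandas sketch of the Bronze → Silver → Gold idea; a real implementation on this stack would use Databricks or Fabric with Delta tables, and the file paths and columns here are hypothetical.

    # Conceptual Bronze -> Silver -> Gold layering, shown with pandas for brevity.
    # Paths and column names are hypothetical.
    import pandas as pd

    # Bronze: raw data landed as-is.
    bronze = pd.read_json("bronze/orders.json")

    # Silver: cleaned and conformed (bad rows and duplicates dropped, types fixed).
    silver = (
        bronze.dropna(subset=["order_id"])
              .drop_duplicates(subset=["order_id"])
              .assign(order_date=lambda df: pd.to_datetime(df["order_date"]))
    )

    # Gold: business-ready aggregate for reporting.
    gold = (
        silver.assign(month=silver["order_date"].dt.strftime("%Y-%m"))
              .groupby("month", as_index=False)["amount"]
              .sum()
              .rename(columns={"amount": "monthly_revenue"})
    )

    gold.to_parquet("gold/monthly_revenue.parquet", index=False)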

       

    Why TeamCraft?

    • Greenfield project - build architecture from scratch, no legacy debt
    • Direct impact - your pipelines power real AI products and business decisions
    • Small team, big ownership - no bureaucracy, fast iteration, your voice matters
    • Stable foundation - US-based company, 300+ employees
    • Growth trajectory - scaling with technology as the driver

     

    About the Project

    TeamCraft is a large U.S. commercial roofing company undergoing an ambitious AI transformation. We’re building a centralized data platform from scratch - a unified Azure Lakehouse that integrates multiple operational systems into a single source of truth (Bronze → Silver → Gold).

    This is greenfield development with real business outcomes - not legacy maintenance.

  • · 33 views · 1 application · 22d

    Senior Data Engineer

    Full Remote · Ukraine · 3 years of experience · English - C1

    We are looking for a Senior Data Engineer for staff augmentation

    Must-have:

    Snowflake / SQL 

    AWS stack (S3, Glue, Lambda, CloudWatch, IAM)

    Python

    Terraform (IaC)

    English: C1+ 

    Nice to have:

    Experience with REST APIs

    Airflow for orchestration

    CircleCI

    JavaScript

    Apache Kafka or Hadoop

  • · 179 views · 12 applications · 22d

    Data Engineer

    Full Remote · Ukraine · 1 year of experience · English - B2

    N-iX is a global software development service company that helps businesses across the globe create next-generation software products. Founded in 2002, we unite 2,400+ tech-savvy professionals across 40+ countries, working on impactful projects for industry leaders and Fortune 500 companies. Our expertise spans cloud, data, AI/ML, embedded software, IoT, and more, driving digital transformation across finance, manufacturing, telecom, healthcare, and other industries. Join N-iX and become part of a team where your ideas make a real impact.

     

    This role is ideal for someone at the beginning of their data engineering career who wants to grow in a supportive environment. We value curiosity, a learning mindset, and the ability to ask good questions. If you’re motivated to develop your skills and become a strong Data Engineer over time, we’d be happy to help you grow with us 🚀



    Responsibilities

    • Support the implementation of business logic in the Data Warehouse under the guidance of senior engineers
    • Assist in translating business requirements into basic data models and transformations
    • Help develop, maintain, and monitor ETL pipelines using Azure Data Factory
    • Participate in data loading, validation, and basic query performance optimization
    • Work closely with senior team members and customer stakeholders to understand requirements and data flows
    • Contribute to documentation and follow best practices in data engineering and development
    • Gradually propose improvements and ideas as experience grows

       

    Requirements

    • Up to 1.5 years of experience in Data Engineering
    • Basic hands-on experience with SQL and strong willingness to work with it as a core skill
    • Familiarity with Microsoft Azure or strong motivation to learn Azure-based data solutions
    • Understanding of relational databases and fundamentals of data modeling
    • Ability to write clear and maintainable SQL queries
    • Basic experience with version control systems (e.g. Git)
    • Interest in data warehousing and analytical systems
    • Familiarity with Agile ways of working (through coursework, internships, or first commercial experience)
    • Strong analytical thinking and eagerness to learn from more experienced colleagues

       

    Nice to Have

    • Exposure to Azure Data Factory, dbt, or similar ETL tools
    • Basic knowledge of Databricks
    • Understanding of Supply Chain & Logistics concepts
    • Any experience working with SAP data (MM or related modules)
       
  • · 98 views · 11 applications · 22d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · English - B2

    We are working on a US-based data-driven product, building a scalable and cost-efficient data platform that transforms raw data into actionable business insights.

    For us, data engineering is not just about moving data — it’s about doing it right: with strong architecture, performance optimization, and automation at the core.

    Role Overview

    We are looking for a highly analytical and technically strong Data Engineer to design, build, optimize, and maintain scalable data pipelines.

    You will be responsible for the architectural integrity of the data platform, ensuring seamless data flow from ingestion to business-ready datasets.

    The ideal candidate is an expert in SQL and Python, who understands that great data engineering means:

    • cost efficiency,
    • smart partitioning and modeling,
    • performance optimization,
    • reliable automation.

    Technical Requirements

     

    Must-Have

    • Expert-Level SQL
      • Complex queries and window functions
      • Query optimization and performance tuning
      • Identifying and fixing bottlenecks
      • Reducing query complexity
    • Python
      • Data manipulation
      • Scripting
      • Building ETL / ELT frameworks
    • AWS Core Infrastructure
      • AWS Kinesis Firehose (near-real-time data streaming)
      • Amazon S3 (data storage)
    • Version Control
      • Git (GitHub / GitLab)
      • Branching strategies
      • Participation in technical code reviews

     

    Nice-to-Have

    • Modern Data Stack
      • dbt for modular SQL modeling and documentation
    • Data Warehousing
      • Google BigQuery
      • Query optimization, slot management, cost-efficient querying
    • Advanced Optimization Techniques
      • Partitioning
      • Clustering
      • Bucketing
    • Salesforce Integration
      • Experience integrating Salesforce data into various destinations
    • Docker / ECS
    • AI / ML exposure (a plus)

     

    Key Responsibilities

    • Pipeline Architecture
      • Design and implement robust data pipelines using AWS Kinesis and Python (see the sketch after this list)
      • Move data from raw sources to the Data Warehouse following best practices
    • Data Modeling
      • Transform raw data into clean, business-ready datasets using dbt
    • Performance Engineering
      • Optimize SQL queries and data structures for high performance and cost efficiency
    • Code Quality
      • Lead and participate in code reviews
      • Ensure high standards for performance, security, and readability
    • Collaboration
      • Work closely with Data Analysts and Product Managers
      • Translate business requirements into scalable data schemas
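
    A minimal sketch of the Kinesis piece referenced in the first responsibility above: pushing JSON events onto a Kinesis Data Firehose delivery stream that lands them in S3. The stream name, region, and event shape are assumptions for illustration only.

    # Sketch: send newline-delimited JSON events to a Firehose delivery stream
    # that delivers to S3. Stream name, region, and event fields are hypothetical.
    import json

    import boto3

    firehose = boto3.client("firehose", region_name="us-east-1")

    def send_event(event: dict, stream_name: str = "example-events-to-s3") -> None:
        """Serialize one event and put it on the delivery stream."""
        firehose.put_record(
            DeliveryStreamName=stream_name,
            Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
        )

    if __name__ == "__main__":
        send_event({"user_id": 123, "action": "page_view"})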

     

    Working Schedule

    • Monday – Friday
    • 16:00 – 00:00 Kyiv time
    • Full alignment with a US-based team and stakeholders

     

    What We Value

    • Strong ownership of data architecture
    • Ability to think beyond “just making it work”
    • Focus on scalability, performance, and cost
    • Clear communication with technical and non-technical teams
  • · 83 views · 16 applications · 23d

    Lead/Architect Data Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    We are seeking a highly skilled Lead Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

     

    Does this relate to you?

    • 7+ years of experience in the Data Engineering field
    • At least 1+ year of experience as a Lead/Architect
    • Strong expertise in SQL and data modeling concepts.
    • Hands-on experience with Airflow.
    • Experience working with Redshift.
    • Proficiency in Python for data processing.
    • Strong understanding of data governance, security, and compliance.
    • Experience in implementing CI/CD pipelines for data workflows.
    • Ability to work independently and collaboratively in an agile environment.
    • Excellent problem-solving and analytical skills.

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions.
    • Build and optimize ETL/ELT pipelines for efficient data integration.
    • Design and implement data models to support analytical and reporting needs.
    • Ensure data integrity, quality, and security across all pipelines.
    • Optimize data performance and scalability using best practices.
    • Work with big data technologies such as Redshift.
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
    • Implement CI/CD pipelines for data workflows.
    • Monitor, troubleshoot, and improve data processes and system performance.
    • Stay updated with industry trends and emerging technologies in data engineering.

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance reviews, including remuneration
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program implying attractive bonuses
    • External & internal training and IT certifications

    Ready to try your hand? Send your CV without hesitation!

  • · 32 views · 7 applications · 23d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4.5 years of experience · English - B2

    This is a short-term engagement (around 2 months) with potential to extend
    Role Overview

    We are looking for a hands-on Data Platform Engineer to complete and harden our data ingestion and transformation pipelines. This role is execution-heavy: building reliable ETL, enforcing data quality, wiring orchestration, and making the platform observable, testable, and documented.

    You will work with production databases, APIs, Airflow, dlt, dbt, and a cloud data warehouse. The goal is to deliver data that is correct, incremental, tested, and explainable.
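
    As a minimal sketch of the ingestion side of this stack, here is roughly what an incremental dlt resource with a merge write disposition could look like. The endpoint, cursor column, and Snowflake-style destination are illustrative assumptions, not the project's actual setup; downstream dbt models and tests would then build on the loaded raw tables.

    # Sketch of an incremental dlt pipeline: a resource with a merge write
    # disposition and an incremental cursor, loaded into a warehouse.
    # Endpoint, columns, and destination are hypothetical.
    import dlt
    import requests

    @dlt.resource(primary_key="id", write_disposition="merge")
    def transactions(
        updated_at=dlt.sources.incremental("updated_at", initial_value="2024-01-01T00:00:00Z")
    ):
        # Pull only rows changed since the last successful load.
        resp = requests.get(
            "https://api.example.com/transactions",
            params={"updated_since": updated_at.last_value},
            timeout=30,
        )
        resp.raise_for_status()
        yield resp.json()

    pipeline = dlt.pipeline(
        pipeline_name="transactions_ingest",
        destination="snowflake",  # any dlt-supported warehouse
        dataset_name="raw",
    )

    if __name__ == "__main__":
        print(pipeline.run(transactions()))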

    Responsibilities:
    1. Key Deliverables.
    2. Transformation & Analytics (dbt).
    3. Data Quality & Testing.
    4. Documentation & Enablement.

    Required Skills & Experience:
    1. Strong experience building production ETL/ELT pipelines.

    2. Hands-on experience with dlt (or similar modern ingestion tools).

    3. Solid dbt experience (models, tests, docs).

    4. Experience with Airflow or similar workflow orchestrators.

    5. Strong SQL skills and understanding of data modeling.

    6. Experience working with large, incremental datasets.

    7. Good knowledge of Python.
    8. High English level – B2+.

    Nice to Have

    1. Experience with fintech or high-volume transactional data.

    2. Familiarity with CI-based data testing.

    3. Experience publishing internal data catalogs or documentation portals.

    Interview stages:
    1. Interview with a recruiter.
    2. Technical Interview.
    3. Reference check.
    4. Offer.

    What We Offer:
    Full-time role with flexible hours after probation.
    Ongoing training and educational opportunities.
    Performance reviews every 6 months.
    Competitive salary in USD.
    21 paid vacation days.
    7 paid sick days (+15 for serious cases like COVID or surgery).
    10 floating public holidays.
    Online team-building events & fun corporate activities.
    Projects across diverse domains (e-commerce, healthcare, fintech, etc.).
    Clients from the USA, Canada, and Europe.
     

  • · 49 views · 1 application · 23d

    Senior Data Engineer

    Full Remote · Ukraine · Product · 5 years of experience · English - B1

    Zoral Labs, a leading provider of research and development to the software industry, is looking for an experienced Senior Data Engineer to join its development center remotely.

    Required skills:

    • 5+ years of enterprise experience in a similar position 
    • Expert knowledge of Python - experience with data pipelines and data frames
    • Expert knowledge of SQL and DBMS (any) at the logical level. Knowledge of physical details will be a plus.
    • Experience with GCP (BigQuery, Composer, GKE, Storage, Logging and Monitoring, Services API etc.)
    • Understanding of DWH and DLH (Inmon vs Kimball, medallion, ETL/ELT)
    • Columnar data management and/or NoSQL system(s) experience
    • Understanding and acceptance of an enterprise-like working environment


    Soft skills:

    • Fast learner, open-minded, goal oriented. Problem solver
    • Analytical thinking, proper communication
    • English B1+


    Project description:

    We specialize in advanced software fields such as BI, Data Mining, Artificial Intelligence, Machine Learning (AI/ML), High Speed Computing, Cloud Computing, BIG Data Predictive Analytics, Unstructured Data processing, Finance, Risk Management and Security.

    We create extensible decision engine services, data analysis and management solutions, and real-time automatic data processing applications.
    We are looking for software engineers to design, build, and implement a large, scalable web service architecture with a decision engine at its core. If you are excited about developing artificial intelligence, behavior analysis data solutions, and big data approaches, we can give you an opportunity to reveal your talents.

     

    About Zoral Labs:

    Zoral is a fintech software research and development company. We were founded in 2004.

    We operate one of the largest labs in Europe focused on Artificial Intelligence/Machine Learning (AI/ML) and predictive systems for consumer/SME credit and financial products.

    Our clients are based in USA, Canada, Europe, Africa, Asia, South America and Australia.

    We are one of the world’s leading companies in the use of unstructured, social, device, MNO, bureau and behavioral data, for real-time decisioning and predictive modeling.

    Zoral software intelligently automates digital financial products.

    Zoral produced the world’s first fully automated STP consumer credit platforms.

    We are based in London, New York, and Berlin.

    Web site:
    https://zorallabs.com/company

    Company page at DOU:
    https://jobs.dou.ua/companies/zoral/

  • · 15 views · 0 applications · 23d

    IT Infrastructure Administrator

    Office Work · Ukraine (Dnipro) · Product · 1 year of experience · English - None

    Biosphere Corporation is one of the largest producers and distributors of household, hygiene, and professional products in Eastern Europe and Central Asia (TM Freken BOK, Smile, Selpak, Vortex, Novita, PRO service, and many others). We are inviting an IT Infrastructure Administrator to join our team.

    Key responsibilities:

    • Administration of Active Directory
    • Managing group policies
    • Managing services via PowerShell
    • Administration of VMWare platform
    • Administration of Azure Active Directory
    • Administration of Exchange 2016/2019 mail servers
    • Administration of Exchange Online
    • Administration of VMWare Horizon View

    Required professional knowledge and skills:

    • Experience in writing automation scripts (PowerShell, Python, etc.)
    • Skills in working with Azure Active Directory (user and group creation, report generation, configuring synchronization between on-premise and cloud AD)
    • Skills in Exchange PowerShell (mailbox creation, search and removal of emails based on criteria, DAG creation and management)
    • Experience with Veeam Backup & Replication, VMWare vSphere (vCenter, DRS, vMotion, HA), VMWare Horizon View
    • Windows Server 2019/2025 (installation, configuration, and adaptation)
    • Diagnostics and troubleshooting
    • Working with anti-spam systems
    • Managing mail transport systems (exim) and monitoring systems (Zabbix)

    We offer:

    • Interesting projects and tasks
    • Competitive salary (discussed during the interview)
    • Convenient work schedule: Mon–Fri, 9:00–18:00; partial remote work possible
    • Official employment, paid vacation, and sick leave
    • Probation period — 2 months
    • Professional growth and training (internal training, reimbursement for external training programs)
    • Discounts on Biosphere Corporation products
    • Financial assistance (in cases of childbirth, medical treatment, force majeure, or circumstances caused by wartime events, etc.)

    Office address: Dnipro, Zaporizke Highway 37 (Right Bank, Topol-1 district).

    Learn more about Biosphere Corporation, our strategy, mission, and values at:
    http://biosphere-corp.com/
    https://www.facebook.com/biosphere.corporation/

    Join our team of professionals!

    By submitting your CV for this vacancy, you consent to the use of your personal data in accordance with the current legislation of Ukraine.
    If your application is successful, we will contact you within 1–2 business days.

  • · 32 views · 1 application · 24d

    Senior Data Engineer

    Full Remote · Poland · 7 years of experience · English - B2

    Job Description

    • Total of 7+ years of development/design experience with a minimum of 5 years of experience in Big Data technologies on-prem or on cloud.
    • Experience with architecting, building, implementing, and managing Big Data platforms On Cloud, covering ingestion (Batch and Real-time), processing (Batch and Real-time), Polyglot Storage, Data Analytics, and Data Access
    • Good understanding of Data Governance, Data Security, Data Compliance, Data Quality, Meta Data Management, Master Data Management, Data Catalog
    • Proven understanding and demonstrable implementation experience of big data platform technologies on cloud (AWS and Azure), including surrounding services like IAM, SSO, Cluster monitoring, Log Analytics, etc.
    • Experience working with Enterprise Data Warehouse technologies, Multi-Dimensional Data Modeling, Data Architectures or other work related to the construction of enterprise data assets
    • Strong Experience implementing ETL/ELT processes and building data pipelines including workflow management, job scheduling and monitoring
    • Experience building stream-processing systems using solutions such as Apache Spark, Databricks, Kafka, etc.; see the sketch after this list
    • Experience with Spark/Databricks technology is a must
    • Experience with Big Data querying tools
    • Solid skills in Python
    • Strong experience with data modelling and schema design
    • Strong SQL programming background
    • Excellent interpersonal and teamwork skills
    • Experience driving solution/enterprise-level architecture and collaborating with other tech leads
    • Strong problem solving, troubleshooting and analysis skills
    • Experience working in a geographically distributed team
    • Experience leading and mentoring other team members
    • Good knowledge of Agile Scrum
    • Good communication skills
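
    As an illustration of the stream-processing point flagged above, here is a minimal Spark Structured Streaming sketch that reads from Kafka and writes Parquet. It assumes the spark-sql-kafka connector is on the classpath; the broker, topic, and storage paths are hypothetical.

    # Minimal Structured Streaming sketch: Kafka in, Parquet out.
    # Broker, topic, and paths are hypothetical; the Kafka connector must be available.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker.example.com:9092")
             .option("subscribe", "events")
             .load()
             .select(
                 F.col("value").cast("string").alias("payload"),
                 F.col("timestamp"),
             )
    )

    query = (
        events.writeStream.format("parquet")
              .option("path", "s3a://example-bucket/events/")
              .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
              .trigger(processingTime="1 minute")
              .start()
    )

    query.awaitTermination()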

     

    Job Responsibilities

    • Work directly with the Client teams to understand the requirements/needs and rapidly prototype data and analytics solutions based upon business requirements
    • Design, implement, and manage large-scale data platform/applications, including ingestion, processing, storage, data access, data governance capabilities and related infrastructure
    • Support design and development of solutions for the deployment of data analytics notebooks, tools, dashboards and reports to various stakeholders
    • Communication with Product/DevOps/Development/QA team
    • Architect data pipelines and ETL/ELT processes to connect with various data sources
    • Design and maintain enterprise data warehouse models
    • Take part in the performance optimization processes
    • Guide on research activities (PoC) if necessary
    • Manage the cloud-based data & analytics platform
    • Establish best practices for CI/CD within the Big Data scope
  • · 61 views · 6 applications · 24d

    Senior Analytics Engineer (DBT)

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    We are seeking a hands-on Data and BI expert to take ownership of the client's internal data modelling stack. If you are proficient in SQL and DBT, this role may be a great fit for you.

    Hiring stages:
    - HR interview 
    - Technical interview
    - Exam
    - Reference check

    About the project:
    The platform delivers an AI-powered command center for tracking KPIs across teams, turning raw data into actionable insights without complex setups. This Israel-based SaaS platform automates performance analysis, supports HR metrics, and integrates seamlessly with over 50 tools for real-time dashboards and alerts.

    Required Qualifications:
    - 7+ years in a Data Analyst, BI Developer, Analytics Engineer, or similar role
    - Expert-level SQL skills with deep experience in PostgreSQL
    - 4+ years of production DBT development (including model structure, tests, and deployment)
    - High proficiency with DBT (Data Build Tool) is a must
    - Upper-Intermediate English level or higher
    - Solid understanding of Git-based workflows and CI/CD for analytics code
    - Detail-oriented, independent, and confident in communicating technical decisions

    Nice-to-Have:
    - Experience with modern cloud data warehouses (e.g. Snowflake, BigQuery, Redshift)
    - Familiarity with ETL & orchestration tools (e.g. Airbyte, Fivetran)
    - Understanding of data governance, data cataloguing, and metadata management
    - Comfortable working in high-growth startup environments with evolving systems and priorities

    Key Responsibilities
    - Design, build, and maintain modular DBT models powering customer-facing KPIs
    - Define and implement data modelling best practices, including testing, documentation, and deployment
    - Review and optimise complex data pipelines with a focus on performance and clarity
    - Monitor and improve PostgreSQL performance, indexing, and schema structure
    - Debug and troubleshoot issues across the entire data flow—from source connectors to dashboards
    - Collaborate closely with product and engineering to support rapid iteration and insights delivery
     

  • · 27 views · 1 application · 24d

    Data Engineer (DBT, Snowflake), Investment Management Solution

    Ukraine, Poland, Georgia, Armenia, Cyprus · 5 years of experience · English - None

    Client

    Our client is one of the world’s top 20 investment companies headquartered in Great Britain, with branch offices in the US, Asia, and Europe.

     

    Project overview

    The company’s IT environment is constantly growing, with around 30 programs and more than 60 active projects. They are building a data marketplace that aggregates and analyzes data from multiple sources such as stock exchanges, news feeds, brokers, and internal quantitative systems.

    As the company moves to a new data source, the main goal of this project is to create a golden source of data for all downstream systems and applications. The team is performing classic ELT/ETL: transforming raw data from multiple sources (third-party and internal) and creating a single interface for delivering data to downstream applications.

     

    Position overview

    We are looking for a Data Engineer with strong expertise in DBT, Snowflake, and modern data engineering practices. In this role, you will design and implement scalable data models, build robust ETL/ELT pipelines, and ensure high-quality data delivery for critical investment management applications.

     

    Responsibilities

    • Design, build, and deploy DBT Cloud models.
    • Design, build, and deploy Airflow jobs (Astronomer); see the sketch after this list.
    • Identify and test for bugs and bottlenecks in the ELT/ETL solution.
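
    A minimal sketch of how the Airflow jobs above could chain a raw-load step with a dbt run, assuming a BashOperator-based DAG of the kind that runs on Astronomer; the DAG id, schedule, script name, and dbt command are hypothetical.

    # Sketch of an Airflow DAG chaining a raw-load step with a dbt run.
    # DAG id, schedule, and commands are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="golden_source_elt",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        load_raw = BashOperator(
            task_id="load_raw",
            bash_command="python load_raw_sources.py",  # hypothetical ingestion step
        )

        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --target prod",
        )

        load_raw >> dbt_run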

     

    Requirements

    • 5+ years of experience in software engineering (GIT, CI/CD, Shell scripting).
    • 3+ years of experience building scalable and robust Data Platforms (SQL, DWH, Distributed Data Processing).
    • 2+ years of experience developing in DBT Core/Cloud.
    • 2+ years of experience with Snowflake.
    • 2+ years of experience with Airflow.
    • 2+ years of experience with Python.
    • Good spoken English.

     

    Nice to have

    • Proficiency in message queues (Kafka).
    • Experience with cloud services (Azure).
    • CI/CD knowledge (Jenkins, Groovy scripting).
  • · 49 views · 5 applications · 25d

    Power Platform Consultant / Automation Specialist

    Full Remote · Ukraine · 3 years of experience · English - B2

    Must-have skills (top 3)
    Power Automate (design and implementation of productive workflows)
    Power Apps (canvas/integration with processes)
    Power BI (basic understanding of reporting and data models)


    Experience
    At least 3 years, ideally 5+ years with Power Automate & Power Apps
    Experience with business process automation (workflows, email automation, approvals, etc.)
    Consulting on the optimal use of Power Automate in the company
    Ideally, initial exposure to AI integrations (e.g., AI Builder, Copilot, external APIs)


    Nice-to-have
    Supply chain context
    SAP as source system
    Snowflake

    Industry: Pharmaceuticals/manufacturing


    Language
    English: very good written and spoken
    German: very good desirable, but not essential

     

    We look forward to your application, CV, and project experience description!

  • · 66 views · 9 applications · 26d

    Senior Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 8 years of experience · English - B2

    Position Summary:

    We are looking for a talented Senior Data Platform Engineer to join our Blockchain team and participate in the development of the data collection and processing framework to integrate new chains. This is a remote role, and we are happy to consider applications from anywhere in Europe.

    More details: crystalblockchain.com

    Duties and responsibilities:

    • Integration of blockchains, Automated Market Maker (AMM) protocols, and bridges within Crystal's platform;
    • Active participation in development and maintenance of our data pipelines and backend services;
    • Integrate new technologies into our processes and tools;
    • End-to-end feature design and implementation;
    • Code, debug, test and deliver features and improvements in a continuous manner;
    • Provide code review, assistance and feedback for other team members.


    Required:

    • 8+ years of experience developing Python backend services and APIs;
    • Advanced knowledge of SQL - ability to write, understand and debug complex queries;
    • Basic principles of data warehousing and database architecture;
    • POSIX/Unix/Linux ecosystem knowledge;
    • Strong knowledge and experience with Python and API frameworks such as Flask or FastAPI (see the sketch after this list);
    • Knowledge of blockchain technologies or willingness to learn;
    • Experience with PostgreSQL database system;
    • Knowledge of Unit Testing principles;
    • Experience with Docker containers and proven ability to migrate existing services;
    • Independent and autonomous way of working;
    • Team-oriented work and good communication skills are an asset.
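
    To illustrate the Python API framework point flagged above, here is a minimal FastAPI sketch of a small backend endpoint; the route, response model, and in-memory lookup are hypothetical placeholders rather than anything from Crystal's platform. It can be served locally with, for example, uvicorn main:app --reload.

    # Minimal FastAPI sketch of a backend endpoint; data and route are hypothetical.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="example-address-service")

    class AddressInfo(BaseModel):
        address: str
        risk_score: float

    # Hypothetical in-memory store standing in for a real PostgreSQL lookup.
    _SCORES = {"bc1qexampleaddress": 0.12}

    @app.get("/addresses/{address}", response_model=AddressInfo)
    def get_address(address: str) -> AddressInfo:
        """Return a risk score for a known address, or 404 if unknown."""
        if address not in _SCORES:
            raise HTTPException(status_code=404, detail="address not found")
        return AddressInfo(address=address, risk_score=_SCORES[address])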


    Would be a plus:

    • Practical experience in big data and frameworks – Kafka, Spark, Flink, Data Lakes and Analytical Databases such as ClickHouse;
    • Knowledge of Kubernetes and Infrastructure as Code – Terraform and Ansible;
    • Passion for Bitcoin and Blockchain technologies;
    • Experience with distributed systems;
    • Experience with open-source solutions;
    • Experience with Java or willingness to learn.