Jobs (166)

  • 21 views · 0 applications · 14d

    Senior Data Engineer IRC278987

    Full Remote · Ukraine · 3.5 years of experience · B2 - Upper Intermediate

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.

    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.

    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing (see the sketch after this list)
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)
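
    As an illustration of the "config-driven ETL/ELT with CDC" requirement above, here is a minimal sketch of a config-driven CDC upsert into a Delta table with PySpark on Databricks; the feed dictionary, paths, and table names are illustrative assumptions, not project specifics.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window
    from delta.tables import DeltaTable

    # Hypothetical config describing one CDC feed; in practice this would come
    # from a YAML/JSON file or a control table.
    feed = {
        "source_path": "/mnt/landing/claims/",     # hypothetical landing zone
        "target_table": "main.silver.claims",      # hypothetical Unity Catalog table
        "keys": ["claim_id"],
        "sequence_col": "event_ts",
    }

    spark = SparkSession.builder.getOrCreate()

    # Keep only the latest change per business key, ordered by the CDC sequence column.
    w = Window.partitionBy(*feed["keys"]).orderBy(F.col(feed["sequence_col"]).desc())
    updates = (
        spark.read.json(feed["source_path"])
        .withColumn("_rn", F.row_number().over(w))
        .filter("_rn = 1")
        .drop("_rn")
    )

    # MERGE the deduplicated changes into the target Delta table (upsert).
    target = DeltaTable.forName(spark, feed["target_table"])
    condition = " AND ".join(f"t.{k} = s.{k}" for k in feed["keys"])
    (target.alias("t")
        .merge(updates.alias("s"), condition)
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())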

     

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
  • 17 views · 0 applications · 14d

    Senior Data Platform Architect

    Full Remote · Ukraine · 10 years of experience · B2 - Upper Intermediate
    • Project Description:

      We are seeking an expert with deep proficiency as a Platform Engineer, possessing experience in data engineering. This individual should have a comprehensive understanding of both data platforms and software engineering, enabling them to integrate the platform effectively within an IT ecosystem.

       

    • Responsibilities:

      • Manage and optimize data platforms (Databricks, Palantir).
      • Ensure high availability, security, and performance of data systems.
      • Provide valuable insights about data platform usage.
      • Optimize computing and storage for large-scale data processing.
      • Design and maintain system libraries (Python) used in ETL pipelines and platform governance.
      • Optimize ETL Processes – Enhance and tune existing ETL processes for better performance, scalability, and reliability.

       

    • Mandatory Skills Description:

      • Minimum 10 years of experience in IT/Data.
      • Minimum 5 years of experience as a Data Platform Engineer/Data Engineer.
      • Bachelor's degree in IT or a related field.
      • Infrastructure & Cloud: Azure, AWS (expertise in storage, networking, compute).
      • Data Platform Tools: Any of Palantir, Databricks, Snowflake.
      • Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
      • SQL: Expertise in writing and optimizing SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
      • Data Warehousing: Experience working with data warehousing concepts and platforms, ideally Databricks.
      • ETL Tools: Familiarity with ETL tools & processes.
      • Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
      • Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
      • Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
      • Data Quality Tools: Experience implementing data validation, cleaning, and quality frameworks, ideally Monte Carlo.

       

    • Nice-to-Have Skills Description:

      • Containerization & Orchestration: Docker, Kubernetes.
      • Infrastructure as Code (IaC): Terraform.
      • Understanding of Investment Data domain (desired).

       

    • Languages:
      • English: B2 Upper Intermediate
  • 37 views · 2 applications · 14d

    Senior Data Engineer (with Snowflake)

    Full Remote · Czechia, Spain, Poland, Romania, Slovakia · 4 years of experience · B2 - Upper Intermediate

    Join an exciting journey to create a greenfield, cutting-edge Consumer Data Lake for a leading global organization based in Europe. This platform will unify, process, and leverage consumer data from various systems, unlocking advanced analytics, insights, and personalization opportunities. As a Senior Data Engineer, you will play a pivotal role in shaping and implementing the platform's architecture, focusing on hands-on technical execution and collaboration with cross-functional teams.

    Your work will transform consumer data into actionable insights and personalization on a global scale. Using advanced tools to tackle complex challenges, you’ll innovate within a collaborative environment alongside skilled architects, engineers, and leaders.

     

    Key Responsibilities:

     

    • Hands-On Development: Build, maintain, and optimize data pipelines for ingestion, transformation, and activation.
    • Create and implement scalable solutions to handle diverse data sources and high volumes of information
    • Data Modeling & Warehousing: Design and maintain efficient data models and schemas for a cloud-based data platform.
    • Develop pipelines to ensure data accuracy, integrity, and accessibility for downstream analytics.
    • Collaboration: Partner with Solution Architects to translate high-level designs into detailed implementation plans.
    • Work closely with Technical Product Owners to align data solutions with business needs.
    • Collaborate with global teams to integrate data from diverse platforms, ensuring scalability, security, and accuracy.
    • Platform Development: Enable data readiness for advanced analytics, reporting, and segmentation.
    • Implement robust frameworks to monitor data quality, accuracy, and performance.
    • Testing & Quality Assurance: Implement robust security measures to protect sensitive consumer data at every stage of the pipeline.
    • Ensure compliance with data privacy regulations (e.g., GDPR, CCPA) and internal policies.
    • Monitor and address potential vulnerabilities, ensuring the platform adheres to security best practices.

     

    Requirements:

     

    • 4+ years of experience showcasing technical expertise and critical thinking in data engineering.
    • Hands-on experience with DBT and strong Python programming skills.
    • Proficiency in Snowflake and expertise in data modeling are essential.
    • Demonstrated experience in building consumer data lakes and developing consumer analytics capabilities is required.
    • In-depth understanding of privacy and security engineering within Snowflake, including concepts like RBAC, dynamic/tag-based data masking, row-level security/access policies, and secure views (see the sketch after this list).
    • Ability to design, implement, and promote advanced solution patterns and standards for solving complex challenges.
    • Strong experience with Azure and familiarity with other cloud platforms
    • Practical experience with Big Data batch and streaming tools.
    • Competence in SQL, NoSQL, relational database design (SAP HANA experience is a bonus), and efficient methods for data retrieval and preparation at scale.
    • Proven ability to collect and process raw data at scale, including scripting, web scraping, API integration, and SQL querying.
    • Experience working in global environments and collaborating with virtual teams.
    • A Bachelor’s or Master’s degree in Data Science, Computer Science, Economics, or a related discipline.
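
    A minimal sketch of the Snowflake privacy controls mentioned in the requirements (dynamic data masking applied to a column), driven from Python with the snowflake-connector-python package; the account settings, warehouse, table, column, policy, and role names are illustrative assumptions.

    import os
    import snowflake.connector

    # Credentials are read from the environment in this sketch; a real setup
    # would typically use key-pair authentication or OAuth.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORM_WH",       # hypothetical warehouse
        database="CONSUMER_LAKE",       # hypothetical database
        schema="CORE",
    )
    cur = conn.cursor()

    # Dynamic data masking: only a privileged role sees raw e-mail addresses.
    cur.execute("""
        CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING)
        RETURNS STRING ->
          CASE WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val ELSE '***MASKED***' END
    """)
    cur.execute("ALTER TABLE consumers MODIFY COLUMN email SET MASKING POLICY email_mask")
    conn.close()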
  • 90 views · 3 applications · 14d

    Trainee Microsoft Business Intelligence to $500

    Hybrid Remote · Ukraine (Lviv) · B1 - Intermediate

    Ready to turn your curiosity about data into real skills?

    We’re looking for a Trainee BI who’s eager to learn fast and dive deep into real data challenges.
    This role offers a great opportunity to work on a large-scale enterprise project as part of a big, supportive development team that will help you grow into a confident BI specialist and a strong team player.
    You’ll develop under the mentorship of an experienced Senior Developer, learning best practices and real-world problem solving from day one.

    The benefits of our student program:
    • Personal mentor for each student
    • 100% employment in case of successful completion of the student program
    • Three-month student program; during this period, a stipend of $300 gross per month is paid

    About project:
    Country: USA
    Business Field: eCommerce, MLM
    The team consists of 30 people

    Required Skills:

    • Living in Lviv (ability to work in the office)
    • English – Intermediate or higher (written and spoken)

    Microsoft SQL Server:

    • Basic understanding of relational databases and data modeling principles
    • Basic experience in writing SQL queries (CTEs, joins, managing stored procedures, functions, triggers); see the sketch after this list
    • Basic understanding of what indexes do
    • Basic understanding of how to analyze data in large databases with many tables and a lot of data
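
    To give a concrete idea of the level implied by "CTEs, joins", here is a minimal sketch that runs one such query against SQL Server from Python via pyodbc; the connection string, tables, and columns are hypothetical.

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=SalesDemo;Trusted_Connection=yes;"
    )

    query = """
    WITH monthly_orders AS (            -- CTE: orders per customer per month
        SELECT CustomerId,
               DATEFROMPARTS(YEAR(OrderDate), MONTH(OrderDate), 1) AS OrderMonth,
               COUNT(*) AS OrdersCount
        FROM dbo.Orders
        GROUP BY CustomerId, DATEFROMPARTS(YEAR(OrderDate), MONTH(OrderDate), 1)
    )
    SELECT c.Name, m.OrderMonth, m.OrdersCount
    FROM monthly_orders AS m
    JOIN dbo.Customers AS c ON c.Id = m.CustomerId   -- join the CTE to a dimension
    ORDER BY m.OrderMonth, c.Name;
    """

    for name, month, orders_count in conn.cursor().execute(query):
        print(name, month, orders_count)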

    Nice to have:

    • Experience in using modern Integrated Development Platforms (e.g., Visual Studio, SSMS)
    • Ability to create basic SSRS reports
    • Basic understanding of SSIS packages, ETL solutions

       

    What do we offer after successful completion of the student program:

    Salary $500 gross per month
    Interesting work as part of a professional team
    Opportunities for career and professional development
    English classes and other L&D activities, such as reimbursement for courses and certifications
    Recognition gifts and awards
    A new benefits program with the opportunity to be compensated for the following expenses of your choice:

    • Sports activities (these can be one-time sessions or monthly or annual subscriptions)
    • Payment of medical expenses (treatment and preventive procedures at the dentist, services, and consultations with doctors, medical examinations, and tests)
    • Psychologist consultations
    • Medical insurance
    • Training for the development of professional skills (hard and soft skills), including the purchase of necessary educational literature
    • Courses in traditional arts and creativity (such as drawing, music, photography)
    • Individual English language improvement classes
    • Employee referral program

    21 working days of paid vacation
    10 paid sick leaves
    IT club membership

    If you’re eager to learn, curious about data, and ready to grow in a friendly, professional environment, feel free to email me or send your CV through the site.

    The next steps are:

    1. Complete a test task.
    2. Short HR interview – 15-30 minutes.
    3. The technical interview with your future managers – 60 minutes.
    4. In case of success – an offer!

       

    We highly value transparency and communication. Our recruitment process includes feedback at every stage, ensuring you are consistently informed and engaged!

  • 22 views · 0 applications · 13d

    Field Engineer

    Part-time · Hybrid Remote · Ukraine · 2 years of experience · B1 - Intermediate

    Profile Overview

    Position: Field Engineer 
    Location: Ukraine (border crossing point & system data centers)
    Reporting to: Client’s Project Manager
    Language requirements: Ukrainian and English (bilingual or fluent in Ukrainian with good English comprehension)
    Duration:

    Implementation phase: Months 7–9 (approx. 3 months, full-time on-site)

    Warranty phase: Months 10–34 (24 months, part-time – on demand)

    Main purpose of the role:
    To act as the local technical executor and liaison between the client's remote delivery team and the SBGS operational environment.
    The Field Engineer ensures that deployment, configuration, pilot testing, and first-line support are properly carried out on site, in coordination with client's DevOps, QA, and Project Management teams.

    2. Responsibilities During Implementation Phase

    Estimated duration: Months 7–9
    Total effort: ~45–50 person-days (approx. full-time during deployment month)

    2.1 Pre-Deployment Preparation (Month 7)

    Coordinate with the local IT team to verify hardware readiness (servers, network, tablets).

    Validate local connectivity between BCP environment and central system infrastructure.

    Prepare installation prerequisites and system access credentials.

    Participate in pre-deployment briefings with client DevOps and PM.

    Ensure all software installation packages and documentation are available on site.

    2.2 System Deployment at the Pilot Border Crossing Point (Month 8)

    Execute software installation following client deployment scripts and procedures.

    Configure servers, network interfaces, and database connections.

    Verify connectivity to external systems (system A and APIs).

    Conduct smoke tests with the client's QA and DevOps teams.

    Assist with installation and testing of the tablet-based system UI.

    Provide daily progress and issue logs to the client PM.

    Coordinate on-site troubleshooting and escalations (Level 1 support).

    2.3 Pilot Operation and User Training (Month 8–9)

    Support daily pilot operations; monitor system logs and performance.

    Document any incidents or anomalies and escalate to Level 2 support.

    Assist trainers during administrator and user sessions (in Ukrainian).

    Help collect user feedback and translate it into technical observations.

    Participate in pilot evaluation and acceptance activities.

    2.4 Acceptance and Handover (Month 9)

    Support acceptance testing and sign-off sessions.

    Confirm proper closure of deployment tickets and system stability.

    Handover operational documentation and local configuration notes to system administrators.

    3. Responsibilities During Warranty and Support Period

    Duration: 24 months (Months 10–34)
    Effort: ~10 person-days per month (part-time / on demand)

    3.1 Level 1 Support

    Act as the first point of contact for local incidents and user issues.

    Collect incident details, logs, and screenshots; create service tickets.

    Perform basic troubleshooting and re-checks before escalation to client Level 2.

    Verify deployment status after patches or updates from client teams.

    3.2 Preventive Maintenance

    Perform scheduled checks of application availability, network connectivity, and tablet functionality.

    Verify that logging, audit, and backup mechanisms are active.

    Ensure SIEM event forwarding continues to operate correctly.

    Assist SBGS IT staff in performing recovery or restart operations if required.

    3.3 Communication & Reporting

    Maintain a local log of activities and incidents.

    Provide a monthly status report to the client PM, summarizing:

    Tickets raised/resolved.

    Local user feedback.

    Infrastructure or network issues observed.

    Participate in quarterly coordination calls with client and IOM.

    3.4 Warranty Checkpoints

    Support mid-term warranty review (Month 12) and final warranty review (Month 24).

    Assist in confirming system stability and readiness for long-term transition.

    4. Skills and Competencies Required

    Technical background in IT systems administration or software deployment.

    Experience with Windows Server, networking, and Oracle Database client configuration.

    Basic understanding of REST APIs and web application architectures.

    Familiarity with logging and monitoring tools (ELK, Prometheus, etc.).

    Good communication and documentation skills.

    Ability to work independently under remote supervision.

    Previous experience in UN / public sector projects is an advantage.

    5. Monthly Effort Overview

    Phase                  | Month(s)      | Effort (person-days) | Description
    Pre-deployment         | Month 7       | 10                   | Coordination, environment readiness
    Deployment & Pilot     | Month 8       | 25                   | Installation, configuration, pilot support
    Acceptance             | Month 9       | 10                   | Acceptance testing, handover
    Warranty (L1 Support)  | Months 10–34  | 10 / month           | Preventive & corrective support on demand

  • 65 views · 1 application · 13d

    Junior Data Engineer

    Full Remote · Ukraine · 2 years of experience · B2 - Upper Intermediate

    Description

    The Digital Health organization is a technology team focused on next-generation Digital Health capabilities that deliver on the Medicine mission and vision of Insight Driven Care. This role will operate within the Digital Health Applications & Interoperability subgroup of the broader Digital Health team, focused on patient engagement, care coordination, AI, healthcare analytics & interoperability, amongst other advanced technologies that enhance our product portfolio with new services while improving clinical & patient experiences.

     

    The project is a cloud-based PaaS Ecosystem built with a privacy by design centric approach to provide a centralized cloud-based platform to store, classify, and control access to federated datasets in a scalable, secure, and efficient manner.

    The ecosystem will allow Customer Operating Units (medical device departments) to store federated data sets of varying sizes and formats and control access to those data sets through Data steward(s). Source data sets can be exposed to qualified use cases and workflows through different project types.

    The Healthcare Data Platform ecosystem will provide ML/AI project capabilities for streamlined development processes and a ML/AI workbench to enhance data exploration, wrangling, and model training.

    In queue: 15+ OUs. At the moment, the data platform is working with the Neuro, Cardio, and Diabetes OUs, but more OUs may come up with requirements in the future.

    GL role: work on the enhancement of current capabilities, including taking over the work that the AWS ProServe team is doing, and develop new requirements that will keep coming from different OUs in the future.

    Requirements

    • Python, Data Engineering, Data Lake or Lakehouse, Apache Iceberg (nice to have), Parquet
    • Good communication skills, pro-active/initiative

     MUST HAVE

    • AWS Platform: Working experience with AWS data technologies, including S3, AWS RDS, Lake Formation
    • Programming Languages: Strong programming skills in Python
    • Data Formats: Experience with JSON, XML and other relevant data formats
    • CI/CD Tools: Ability to deploy using established CI/CD pipelines using GitLab CI, Jenkins, Terraform or similar tools
    • Scripting and automation: experience with scripting languages such as Python, PowerShell, etc.
    • Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, Splunk, ELK, Dynatrace, Prometheus
    • Source Code Management: Expertise with GitLab
    • Documentation: Experience with markdown and in particular Antora for creating technical documentation

    NICE TO HAVE

    • Previous Healthcare or Medical Device experience
    • Experience implementing enterprise-grade cybersecurity & privacy by design in software products
    • Experience working in Digital Health software
    • Experience developing global applications
    • Strong understanding of SDLC; experience with Agile methodologies
    • Software estimation
    • Experience leading software development teams onshore and offshore
    • Experience with FHIR

    Job responsibilities

     

    • Implement data pipelines using AWS services such as AWS Glue, Lambda, Kinesis, etc. (see the sketch after this list)
    • Implement integrations between the data platform and systems such as Atlan, Trino/Starburst, etc
    • Complete logging and monitoring tasks through AWS and Splunk toolsets
    • Develop and maintain ETL processes to ingest, clean, transform and store healthcare data from various sources
    • Optimize data storage solutions using Amazon S3, AWS RDS, Lake Formation and other AWS technologies.
    • Document, configure, and maintain system specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.
    • Participate in planning of system and deployment activities, and be responsible for meeting compliance and security standards.
    • Actively identify system functionality or performance deficiencies, execute changes to existing systems, and test functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
    • Document testing and maintenance of system updates, modifications, and configurations.
    • Leverage platform process expertise to assess whether existing standard platform functionality will solve a business problem or whether a customized solution would be required.
    • Test the quality of a product and its ability to perform a task or solve a problem.
    • Perform basic maintenance and performance optimization procedures in each of the primary operating systems.
    • Ensure system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001)
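
    As a rough illustration of the first responsibility above, here is a minimal sketch of an AWS Glue (PySpark) job that reads raw JSON from S3, deduplicates it, and writes partitioned Parquet back to the lake; bucket names, paths, and the run_date parameter are illustrative assumptions.

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext
    from pyspark.sql import functions as F

    # Job parameters are supplied by the Glue trigger/workflow.
    args = getResolvedOptions(sys.argv, ["JOB_NAME", "run_date"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read the day's raw events and keep one record per event_id (assumed column).
    raw = spark.read.json(f"s3://example-landing-bucket/events/{args['run_date']}/")
    curated = (
        raw.withColumn("ingest_date", F.lit(args["run_date"]))
           .dropDuplicates(["event_id"])
    )

    # Write curated data as partitioned Parquet for downstream consumers.
    (curated.write
        .mode("append")
        .partitionBy("ingest_date")
        .parquet("s3://example-curated-bucket/events/"))

    job.commit()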
  • 47 views · 1 application · 13d

    Data Engineer(dbt)

    Full Remote · EU · 3 years of experience · B2 - Upper Intermediate

    We’re looking for a skilled Data Engineer for a major modernization and migration project with a large Retail customer.
    You’ll work remotely as part of a collaborative Agile team and coordinate closely with a hands-on Data Architect.

    Responsibilities:

    • Develop and maintain robust data models and transformation logic using dbt
    • Validate and test migrated datasets within Snowflake to ensure accuracy and consistency
    • Collaborate closely with the Data Architect on solution design and technical implementation
    • Work cross-functionally with Front End Developers, Analysts/Testers, and key business stakeholders
    • Support the migration from SAP BW/4HANA and SAC to the new platform

    Requirements:

    • 3+ years of experience in Data Engineering or BI projects
    • Hands-on experience developing data models and transformation logic using dbt
    • Experience working with Snowflake or similar cloud data warehouses (DWH) such as Google BigQuery, Amazon Redshift
    • Strong skills in writing and optimizing SQL queries, and validating transformation results
    • Experience in building data models and reports in Power BI

    Nice-to-Have:

    • Familiarity with Azure DevOps or other Git-based CI/CD tools
    • Knowledge of SAP HANA (as a legacy data source)
    • Understanding of or experience with SAP Analytics Cloud

    What we offer:

    • Annual paid leave: 18 working days
    • Annual paid sick leave
    • Opportunity to participate in various events (educational programs, seminars, training sessions)
    • English courses
    • Healthcare program
    • IT Cluster membership
    • Competitive compensation
  • 50 views · 7 applications · 1d

    Senior Data Engineer (with ETL/Talend expertise)

    Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    Are you a skilled Senior Data Engineer (with ETL/Talend expertise) looking for a new challenge? Our team is seeking an experienced Talend Developer who is passionate about data integration and ETL processes. Join us to drive innovative projects and make a significant impact. Apply now!

     

    Innovecs is a global digital transformation tech company with a presence in the US, the UK, the EU, Israel, Australia, and Ukraine. Specializing in software solutions, the Innovecs team has experience in Supply Chain, Healthtech, Software & Hightech, and Gaming.
    For the fifth year in a row, Innovecs is included in the Inc. 5000 and recognized in IAOP’s ranking of the best global outsourcing service providers. Innovecs is featured in the Global Top 100 Inspiring Workplaces Ranking and won gold at the Employer Brand Management Awards.
            

    Requirements:

    • Bachelor's degree in Computer Science, Information Technology, or related field.
    • Strong experience in Data warehousing using ETL Talend Data Integration tool.
    • Talend Big Data platform, Snowflake.
    • Experience designing and delivering complex, large-volume data warehouse applications.
    • Senior Level Talend ETL development (4+ years of hard-core Talend experience).
    • Strong experience in SQL programming.
    • Confident user of GenAI tools for productivity (Copilot, Cortex).
    • It's a plus if you have Talend and Snowflake Certifications.

     

    Responsibilities:

    • Design, develop, and maintain ETL processes using Talend to extract, transform, and load data from various sources into data warehouses or other storage systems.
    • Integrate data from different sources such as databases, APIs, flat files, and cloud services into a unified data repository.
    • Ensure data accuracy, consistency, and integrity by implementing data quality checks and cleansing processes.
    • Diagnose and resolve issues related to ETL processes and data integration, providing support to end-users and other stakeholders.
    • Stay updated with the latest features and best practices in Talend and data integration technologies, continuously improving processes and methodologies.

     

    Our value to you: 

    • Flexible hours and remote-first mode
    • Competitive compensation
    • Complete Hardware/Software setup – anything you need for work
    • Open-door culture, transparent communication, and top management at a handshake distance
    • Health insurance, vacation, sick leaves, holidays, paid maternity/paternity leave
    • Access to our learning & development center: workshops, webinars, training platform, and edutainment events
    • Virtual team buildings and social activities 
  • 23 views · 0 applications · 13d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Description

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people’s lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
  • 39 views · 2 applications · 13d

    Data Engineer

    Hybrid Remote · Ukraine (Kyiv, Lutsk) · Product · 2 years of experience · B1 - Intermediate · Ukrainian Product 🇺🇦

    Jooble is a global technology company. Our main product jooble.org is an international job search website in 66 countries that aggregates thousands of job openings from various sources on a single page. We are ranked among the TOP-10 most visited websites in the Jobs and Employment segment worldwide. Since 2006, we’ve grown from a small startup founded by two students into a major player in the online recruitment market with 300+ professionals. Where others see challenges, we create opportunities.

     

    What You'll Be Doing

    • Design & Build Pipelines: Design, develop, and maintain robust and scalable ETL/ELT pipelines, moving data from diverse sources into our data warehouse (see the sketch after this list).
    • Ensure Data Quality & Observability: Implement a comprehensive data observability strategy, including automated quality checks, monitoring, and lineage tracking to ensure data is accurate and trustworthy.
    • Optimize & Automate: Write clean, efficient code to automate data processing and continuously optimize our data storage strategies and query performance.
    • Govern & Document: Contribute to our data governance practices and maintain clear documentation for data processes, models, and architecture in our data catalog.
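
    A minimal sketch of what such a pipeline can look like as an Airflow DAG (TaskFlow API, Airflow 2.4+); the task bodies are placeholders, and the schedule, names, and sample rows are assumptions.

    import pendulum
    from airflow.decorators import dag, task

    @dag(
        schedule="@daily",
        start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
        catchup=False,
        tags=["example"],
    )
    def jobs_elt():
        @task
        def extract() -> list[dict]:
            # Placeholder: pull rows from a source system (API, MSSQL, etc.).
            return [{"job_id": 1, "views": 21}, {"job_id": 2, "views": 0}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Placeholder: keep only rows with at least one view.
            return [r for r in rows if r["views"] > 0]

        @task
        def load(rows: list[dict]) -> None:
            # Placeholder: upsert into the warehouse (e.g. PostgreSQL).
            print(f"loading {len(rows)} rows")

        load(transform(extract()))

    jobs_elt()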
       

    What We're Looking For

    Core Requirements

    • Experience: 2+ years of hands-on experience in a data engineering role.
    • Core Languages: Strong proficiency in SQL (including complex queries and optimization) and Python for data processing.
    • Databases: Practical experience with relational databases, specifically PostgreSQL and MSSQL.
    • ETL/ELT: Proven experience designing and building pipelines using modern data orchestrators like Airflow or Dagster.
    • Data Modeling: A solid understanding of data warehousing concepts and data modeling techniques (e.g., dimensional modeling).
    • Ukrainian proficiency level: Upper Intermediate and higher (spoken and written)

     

    Bonus Points (Strongly Desired)

    • Streaming Data: Hands-on experience with streaming technologies like Kafka, Debezium, or message queues like RabbitMQ.
    • Specialized Databases: Experience with MPP databases (Greenplum/CloudberryDB) or columnar stores (ClickHouse).
    • Modern Data Stack: Familiarity with tools like dbt, Docker.
    • Basic knowledge of a cloud platform like AWS, GCP, or Azure.
    • A demonstrable interest in the fields of AI and Machine Learning.

    Our Tech Stack Includes

    • Observability & BI: DataHub, Grafana, Metabase
    • Languages: Python, SQL
    • Databases: PostgreSQL, MSSQL, ClickHouse, Greenplum/CloudberryDB
    • Orchestration: Airflow, Dagster
    • Streaming & Messaging: Kafka, Debezium, RabbitMQ

     

    Why Jooble?

    Work format
    Flexibility is not just a word for us. In Kyiv, we work in a hybrid format, combining office and remote work. In other cities and countries, you can work fully remotely. No matter where you are, we’ll make sure you feel comfortable and productive by providing all the necessary equipment.

    Schedule
    You decide when to start your 8-hour workday – anytime between 8:00 and 10:00 a.m. Kyiv time. The key is to stay connected and plan time for team meetings. We value the freedom to organize your day while believing that a shared rhythm helps us work more efficiently.

    Growth and development
    At Jooble, everyone has a dedicated budget for personal and professional development. It's your space to gain new knowledge and skills that help you grow – and make the company stronger at the same time.

    Physical and mental health
    Health is an essential part of both work and life. At Jooble, we offer full medical insurance (after 3 months of employment), and for colleagues abroad – financial support for medical expenses. We also cover consultations with psychologists through our wellbeing service or reimburse 50% of the cost if you choose your own specialist.

    Rest and recovery
    You'll have 24 working days of annual paid vacation + 6 additional recharge days. 20 paid sick days + 4 days without medical confirmation – so you can take time to recover when needed. In addition, you'll get 6 fixed public holidays off.

    Team
    At Jooble, you’ll work alongside strong professionals and experts who grow together and create global-level solutions. Everyone has a voice and can influence processes. We value honest feedback and look for people who share our values.

    Support for Ukraine
    At Jooble, we preserve jobs for mobilized colleagues. As a team, we actively participate in initiatives that support Ukraine, and our product is designed to help even more people find jobs – especially during challenging times.

  • 20 views · 2 applications · 12d

    Senior DevOps / Cloud Architect

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate

    Workload: full-time
    Start: ASAP
    Duration: long-term

    Project: High-throughput real-time data ingestion & processing platform (multi-region, multi-cloud, low-latency).
    Goal: Build infrastructure from scratch for an enterprise-grade system.

    Important!!!
    Experience with enterprise systems is a must-have, and deep knowledge of networking, cloud networking, and k8s is a must.

    Requirements:
    Strong cloud networking (DNS, LB, TCP/UDP, connection management).
    Deep Kubernetes knowledge: CNI (Calico/Cilium), Ingress (NGINX/Envoy), Service Mesh (Istio/Linkerd).
    Strong CI/CD & IaC (ArgoCD, Terraform, Helm, Jenkins, Ansible).
    Streaming systems: Kafka, NATS, Pulsar, RabbitMQ.
    Multi-region / multi-cloud (at least AWS + one more).
    Security & monitoring: TLS/mTLS, IAM, WAF, Prometheus, Grafana.
    Hands-on mindset + ownership attitude.
    English: Upper-Intermediate+.

    Nice to have: QUIC/HTTP3, Anycast routing, Iceberg, and distributed systems experience.

  • 50 views · 4 applications · 13d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    JUTEQ is an AI-native and cloud-native consulting firm helping enterprises in financial services, telecom, and automotive retail build intelligent, production-grade platforms. We combine the power of GenAI, scalable cloud architecture, and automation to deliver next-generation business tools. Our platform supports multi-tenant AI agent workflows, real-time lead processing, and deep analytics pipelines.

    We are seeking an experienced Data Engineer with deep Google Cloud Platform (GCP) experience to lead our data lake, ingestion, observability, and compliance infrastructure. This role is critical to building a production-grade, metadata-aware data stack aligned with SOC2 requirements.

    What You'll Do

    Data Architecture & Lakehouse Design

    • Architect and implement a scalable GCP-based data lake across landing, transformation, and presentation zones.
    • Use native GCP services such as GCS, Pub/Sub, Apache Beam, Cloud Composer, and BigQuery for high-volume ingestion and transformation (see the sketch after this list).
    • Design and implement infrastructure landing zones using Terraform with strong IAM boundaries, secrets management, and PII protection.
    • Build ingestion pipelines using Apache NiFi (or equivalent) to support batch, streaming, and semi-structured data from external and internal systems.

    Data Ingestion & Integration

    • Develop robust ingestion patterns for CRM, CDP, and third-party sources via APIs, file drops, or scraping.
    • Build real-time and batch ingestion flows with schema-aware validation, parsing, and metadata handling.
    • Implement transformation logic and ensure the staging → curated flow adheres to quality, performance, and lineage standards.

    Metadata & Lineage Management

    • Define and enforce metadata templates across all sources.
    • Establish data lineage tracking from ingestion to analytics using standardized tools or custom solutions.
    • Drive schema mapping, MDM support, and data quality governance across ingestion flows.

    SRE & Observability for Data Pipelines

    • Implement alerting, logging, and monitoring for all ingestion and transformation services using Cloud Logging, Cloud Monitoring, OpenTelemetry, and custom dashboards.
    • Ensure platform SLAs/SLOs are tracked and incidents are routed to lightweight response workflows.
    • Support observability for cloud functions, GKE workloads, and Cloud Run-based apps interacting with the data platform.

    Security & Compliance

    • Enforce SOC2 and PII compliance controls: IAM policies, short-lived credentials, encrypted storage, and access logging.
    • Collaborate with security teams (internal/external) to maintain audit readiness.
    • Design scalable permissioning and role-based access for production datasets.

    What We're Looking For

    Core Experience

    • 4+ years in data engineering or architecture roles with strong GCP experience.
    • Deep familiarity with GCP services: BigQuery, Pub/Sub, Cloud Storage, Cloud Functions, Dataflow/Apache Beam, Composer, IAM, and Logging.
    • Expertise in Apache NiFi or similar ingestion/orchestration platforms.
    • Experience with building multi-environment infrastructure using Terraform, including custom module development.
    • Strong SQL and schema design skills for analytics and operational reporting.

    Preferred Skills

    • Experience in metadata management, MDM, and schema evolution workflows.
    • Familiarity with SOC2, GDPR, or other data compliance frameworks.
    • Working knowledge of incident response systems, alert routing, and lightweight ITSM integration (JIRA, PagerDuty, etc.).
    • Experience with data lineage frameworks (open-source or commercial) is a plus.
    • Exposure to graph databases or knowledge graphs is a plus but not required.

    Why Join Us

    • Help design a full-stack, production-grade data infrastructure from the ground up.
    • Work in a fast-paced AI-driven environment with real product impact.
    • Contribute to a platform used by automotive dealerships across North America.
    • Be part of a high-trust, hands-on team that values autonomy and impact.
  • 99 views · 25 applications · 12d

    Data Engineer

    Full Remote · Worldwide · Product · 2 years of experience · B1 - Intermediate

    We’re looking for a Data Engineer who will design, build, and maintain reliable data pipelines, ensuring high data quality across multiple internal and external systems. You’ll work closely with product, analytics, and engineering teams to develop robust ETL processes and support data-driven decision-making.
     

    Responsibilities:

    • Build and maintain ETL processes integrating data from various internal and external IT systems.
    • Design and implement efficient orchestration for ETL pipelines (Airflow/Prefect/dbt).
    • Manage and support webhook ingestion pipelines, ensuring reliability and deduplication.
    • Design and optimize SQL data marts for analytics and reporting.
    • Ensure data quality, detect anomalies, and prepare control reports.
    • Perform one-time data loads and backfills when needed.

    Requirements:

    • Strong SQL skills (Postgres or similar): CTEs, window functions, query profiling, and optimization for large tables.
    • Proficiency in Python for production scripts and automation (pandas / pyarrow / requests / asyncio).
    • Hands-on experience in web scraping (Playwright / Selenium / Scrapy), proxy rotation, anti-bot & CAPTCHA bypass, incremental updates.
    • Experience with webhooks: ingestion design, idempotency/deduplication, retries, integrity and latency control (see the sketch after this list).
    • Solid understanding of ETL/ELT orchestration (cron / Airflow / Prefect / dbt); incremental loads, monitoring, and notifications.
    • Working with APIs (REST / GraphQL) and data formats (JSON / CSV / Parquet).
    • Strong Data Quality mindset: validation tests, reconciliation, data contracts, troubleshooting financial metric discrepancies (bets / wins / GGR).
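
    A minimal sketch of the idempotency/deduplication point above: each webhook delivery carries a unique event_id, and a primary-key constraint turns retries and duplicate deliveries into no-ops. The storage (SQLite here) and payload shape are illustrative assumptions.

    import json
    import sqlite3

    conn = sqlite3.connect("webhooks.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS webhook_events (
            event_id    TEXT PRIMARY KEY,                    -- idempotency key
            received_at TEXT DEFAULT CURRENT_TIMESTAMP,
            payload     TEXT NOT NULL
        )
    """)

    def ingest(event: dict) -> bool:
        """Store the event once; return False if it was already processed."""
        try:
            with conn:
                conn.execute(
                    "INSERT INTO webhook_events (event_id, payload) VALUES (?, ?)",
                    (event["event_id"], json.dumps(event)),
                )
            return True
        except sqlite3.IntegrityError:   # duplicate delivery or retry
            return False

    print(ingest({"event_id": "bet-123", "amount": 10.0}))   # True  (first delivery)
    print(ingest({"event_id": "bet-123", "amount": 10.0}))   # False (deduplicated retry)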

    Nice to Have:

    • Experience implementing data quality metrics and data contracts (consistency, completeness).
    • Hands-on with Spark (PySpark), Airflow, S3, and data profiling tools (ydata-profiling / Jupyter).
    • Experience setting up monitoring and logging (Grafana).
    • Familiarity with popular formats: Parquet / CSV / JSON / Iceberg.
    • Experience working with BigQuery.
    • Basic BI tools knowledge (Power BI / Tableau / Metabase) for dashboard creation.

    What We Offer:

    • Remote-first work format.
    • Flexible working hours.
    • Opportunity to be part of a rapidly growing iGaming product.
  • 71 views · 4 applications · 12d

    BI/Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 4 years of experience · B2 - Upper Intermediate

    We’re a product powerhouse building a full-stack ecosystem for iGaming businesses. 40M+ players. 250 brilliant minds. One bet: our technology is so rock-solid that we stake our own business on it.

    From our hubs in Ukraine, Georgia, the UK, and the Philippines, we blend real-world experience and a battle-tested Tech Radar, all within an open-door culture.

    TURNKEY 
    Turnkey isn't just our biggest team – it's the engine room of everything we build.

    It’s the tech our partners bet their business on: no patchwork, no plug-ins, and built for the real world. Just one battle-tested ecosystem designed to launch, grow, and lead in the most dynamic markets out there.

    Turnkey brings together VeliHorizon (our core platform), VeliX (the experience layer), and VeliPayments – a trio that powers every player's journey and operator's next big leap.

    We invite a BI/Data Engineer to join the VeliHorizon team within Turnkey.

    VeliHorizon is our battle-tested platform built from real gaming floors. High-performance, with microservices at its core and API-first by design, it's the tech that keeps operations smooth, data razor-sharp, and transactions flying.

    Every click, spin, and payout runs through Horizon – proven in the toughest conditions, powering thousands of moments a second. Built by cross-functional teams who own every detail – from first idea to live performance.

    In this role, you will:

    • Maintain and further develop the corporate DWH
    • Monitor and optimise PostgreSQL and ClickHouse performance
    • Develop and support data pipelines in Airflow (Python + SQL)
    • Set up and maintain logical replication between systems
    • Build and maintain reports in Tableau and Metabase
    • Contribute to client reporting solutions on BigQuery

       

    Skills and experience you will need:

    • At least 4 years of proven experience in a Data Engineer role
    • Hands-on experience with PostgreSQL as a DWH (query optimisation, indexing, partitioning)
    • Practical experience with ClickHouse
    • Strong SQL skills (Postgres, BigQuery, analytical functions)
    • Experience designing and supporting ETL processes, knowledge of Airflow
    • Experience with BI/reporting tools (Tableau, Metabase, or similar)
    • Solid Python skills for developing ETL processes and Airflow DAGs
    • Ability to write clean, maintainable code (error handling, logging, structure)
    • Experience with database libraries (psycopg2/sqlalchemy, clickhouse-driver, etc.); see the sketch after this list
    • Spoken English at B1–B2+ level
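
    A minimal sketch of moving a day's rows from PostgreSQL into ClickHouse with the libraries mentioned above (psycopg2 and clickhouse-driver); connection settings, tables, and columns are illustrative assumptions.

    import psycopg2
    from clickhouse_driver import Client

    pg = psycopg2.connect(host="localhost", dbname="dwh", user="etl", password="***")
    ch = Client(host="localhost")

    # Extract one day of bets from the Postgres DWH.
    with pg, pg.cursor() as cur:
        cur.execute(
            """
            SELECT player_id, bet_amount::float8 AS bet_amount, created_at::date AS event_date
            FROM bets
            WHERE created_at::date = %s
            """,
            ("2024-01-01",),
        )
        rows = cur.fetchall()

    # Load the batch into ClickHouse in a single INSERT.
    ch.execute(
        "INSERT INTO analytics.bets (player_id, bet_amount, event_date) VALUES",
        rows,
    )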

       

    Would be a plus:

    • Experience with BigQuery
    • Familiarity with Superset
    • Experience writing unit tests for data pipelines

       

    HIRING PROCESS: Intro Call with TA Specialist → Technical Interview → Final interview → Offer

    We offer:

    • Level up daily. Real mentorship, a clear career path, and support to lead your product.
    • Battle-tested tech stack. Work with what we bet on: microservices, serverless, SRE strength.
    • Health comes first. Insurance and 10 days' sick leave – because your health is a priority.
    • Work your way. Remote, hybrid, Kyiv office – find your rhythm.
    • Time off that matters. 20 days paid vacation, public holidays – to recharge your way.
    • Build your brand. Share your story, grow your voice – inside and out.
    • Culture with a pulse. Team offsites, community events, and the energy of people who care.

       

     

  • 21 views · 1 application · 12d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Description

    Our client is a global technology and manufacturing company with a long history of innovation across multiple industries, including industrial solutions, worker safety, and consumer goods. Headquartered in the United States, the company develops and produces a wide range of products – from adhesives, abrasives, and protective materials to personal safety equipment, electronic components, and optical films. With tens of thousands of products in its portfolio and operations in markets around the world, it plays a key role in delivering high-quality, reliable solutions for both businesses and consumers.

    Requirements

    We are looking for a highly skilled and experienced Senior Data Engineer to join our team. In this role, you will be a key player in designing, building, and optimizing our data architecture and pipelines. You will be working on a complex data project, transforming raw data into reliable, high-quality assets ready for analytics, data science, and business intelligence. As a senior member of the team, you will also be expected to mentor junior and mid-level engineers, drive technical best practices, and contribute to the strategic direction of our data platform.

    Required Qualifications & Skills

    • 5+ years of professional experience in data engineering or a related role.
    • A minimum of 3 years of deep, hands-on experience using Python for data processing, automation, and building data pipelines.
    • A minimum of 3 years of strong, hands-on experience with advanced SQL for complex querying, data manipulation, and performance tuning.
    • Proven experience with cloud data services, preferably Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage).
    • Hands-on experience with big data processing frameworks like Spark (PySpark) and platforms such as Databricks.
    • Solid experience working with large, complex data environments, including data processing, data integration, and data warehousing.
    • Proficiency in data quality assessment and improvement techniques.
    • Experience working with and cleansing a variety of data formats, including unstructured and semi-structured data (e.g., CSV, JSON, Parquet, XML).
    • Familiarity with Agile and Scrum methodologies and project management tools (e.g., Azure DevOps, Jira).
    • Excellent problem-solving skills and the ability to communicate complex technical concepts effectively to both technical and non-technical audiences.

    Preferred Qualifications & Skills

    • Knowledge of DevOps methodologies and CI/CD practices for data pipelines.
    • Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
    • Experience with consuming data from REST APIs.
    • Experience with database design, optimization, and performance tuning for software application backends.
    • Knowledge of dimensional data modeling concepts (Star Schema, Snowflake Schema).
    • Familiarity with modern data architecture concepts such as Data Mesh.
    • Real-world experience supporting and troubleshooting critical, end-to-end production data pipelines.

    Job responsibilities

    Key Responsibilities

    • Architect & Build Data Pipelines: Design, develop, and maintain robust, scalable, and reliable data pipelines using Python, SQL, and Spark on the Azure cloud platform.
    • End-to-End Data Solutions: Architect and implement end-to-end data solutions, from data ingestion and processing to storage in our data lake (Azure Data Lake Storage, Delta Lake) and data warehouse.
    • Cloud Data Services Management: Utilize Azure services like Azure Data Factory, Databricks, and Azure SQL Database to build, orchestrate, and manage complex data workflows.
    • Data Quality & Governance: Implement and enforce comprehensive data quality frameworks, including data profiling, cleansing, and validation routines to ensure the highest levels of data integrity and trust (see the sketch after this list).
    • Performance Optimization: Analyze and optimize data pipelines for performance, scalability, and cost-efficiency, ensuring our systems can handle growing data volumes.
    • Mentorship & Best Practices: Mentor and provide technical guidance to junior and mid-level data engineers. Lead code reviews and champion best practices in data engineering, coding standards, and data modeling.
    • Stakeholder Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements, provide technical solutions, and deliver actionable data products.
    • System Maintenance: Support and troubleshoot production data pipelines, identify root causes of issues, and implement effective, long-term solutions.
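
    As a rough sketch of the data quality responsibility above, here is a minimal PySpark validation step that checks a few constraints on a curated table and fails the run if any are violated; the table name and rules are illustrative assumptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.table("curated.orders")        # hypothetical curated table

    # A few simple data quality rules; real pipelines would externalize these.
    checks = {
        "order_id is never null": df.filter(F.col("order_id").isNull()).count() == 0,
        "order_id is unique": df.count() == df.select("order_id").distinct().count(),
        "amount is non-negative": df.filter(F.col("amount") < 0).count() == 0,
    }

    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        # Surface the failure so the orchestrator (e.g. ADF or Databricks Workflows)
        # stops the run instead of publishing bad data.
        raise ValueError(f"Data quality checks failed: {failed}")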