Jobs: Data Engineer (147 jobs)

  • 40 views · 6 applications · 1d

    Senior AI Data Engineer (Python) to $8000

    Full Remote · Worldwide · Product · 5 years of experience · English - C2

    About Pulse Intelligence

    Pulse Intelligence is building the definitive data platform for the global mining industry. We aggregate, process, and enrich data from hundreds of sources (regulatory filings, stock exchanges, company websites, news, and financial APIs) to give mining investors and analysts a real-time, comprehensive view of every mining asset, company, and commodity on the planet.

     

    Our platform combines large-scale web scraping with LLM-powered data extraction to turn unstructured documents (NI 43-101 technical reports, RNS announcements, SEDAR filings) into structured, queryable intelligence. We're a small team shipping fast, and every engineer has an outsized impact on the product.
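    For illustration only: a minimal sketch of the kind of prompt-driven extraction step described above, written against the OpenAI Python SDK. The model name, field list, and prompt are assumptions made for this example, not Pulse Intelligence's actual pipeline.

    # Minimal sketch: turn an unstructured filing excerpt into a structured record
    # with an LLM. Model, schema fields, and prompt are illustrative assumptions.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    EXTRACTION_PROMPT = (
        "Extract these fields from the mining report excerpt and reply with one JSON object: "
        "asset_name, commodity, country, reported_tonnes, report_type. "
        "Use null for any field the text does not state."
    )

    def extract_asset_record(document_text: str) -> dict:
        """Call the LLM and parse its JSON answer into a Python dict."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": EXTRACTION_PROMPT},
                {"role": "user", "content": document_text[:20000]},  # crude length cap
            ],
        )
        return json.loads(response.choices[0].message.content)

    In a production pipeline a step like this would typically sit behind retries, schema validation, and the deduplication and verification workflows mentioned below.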

     

    About the Role

    We're looking for a Senior AI Data Engineer to take ownership of our entire data pipeline, from raw document ingestion through AI-powered extraction to clean, structured records in our database. You'll be the technical lead on data acquisition and enrichment: architecting scrapers for new sources, designing LLM extraction strategies, making decisions on data modeling, and driving the quality and coverage of our mining asset database.

     

    This is a high-autonomy role for someone who can see the big picture and execute on the details. You'll decide which data sources to prioritise, how to structure extraction pipelines, and when to invest in automation vs. manual curation. You'll ship scrapers one day, redesign an entity extraction pipeline the next, and mentor the team on best practices throughout.

     

    What You'll Do

    • Own data acquisition and scraping - identify, prioritise, and build scrapers for new data sources (exchanges, regulatory filings, company websites, financial APIs) and scale them to run reliably in production
    • Design LLM extraction pipelines - architect and iterate on prompt-driven pipelines that extract structured mining data (assets, production, reserves, companies) from unstructured documents
    • Build the document processing pipeline - take raw PDFs, HTML, and filings from ingestion through to clean, structured data using OCR, parsing, deduplication, and text normalisation
    • Drive data quality and coverage - design verification, deduplication, and enrichment workflows, and own the data model that keeps our mining asset database accurate and well-structured
    • Keep pipelines running - monitor scheduled jobs, design for failure recovery, and ensure the system scales without manual intervention

     

    What You Need

    • 5+ years of Python in data engineering or backend development
    • Web scraping at scale - you've built and maintained production scrapers (Scrapy, Playwright, Selenium, or similar)
    • Prompt engineering - you've used LLM APIs (OpenAI, Anthropic, or similar) to extract structured data from unstructured text, and you iterate on prompts systematically
    • Strong SQL and data modeling - you've designed schemas and optimised queries in PostgreSQL or similar
    • Self-directed - you identify what needs doing and drive it to completion with minimal oversight

     

    Nice to Haves

    • Mining or resources industry knowledge (NI 43-101, JORC, resource classifications)
    • AWS (S3, EKS) or similar cloud infrastructure
    • LLM self-verification, chain-of-thought, or agentic pipelines
    • Experience with workflow orchestration tools (Airflow, Dagster, or similar)
    • Experience mentoring engineers or leading a small data team

     

    Benefits

    • Work on a product that maps the entire global mining industry
    • Small team - your work directly shapes the product
    • Remote-friendly with flexible hours
    • Equity in a growing platform

     

    Hiring Process

    • Introductory call - 30 minutes
    • Take-home challenge - 6 hours
    • Technical & cultural fit interview - 1 hour
    • System design interview - 1 hour
    • Final chat with CEO - offer within 48 hours
  • 11 views · 3 applications · 1d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - C1

    We’re looking for a Senior Data Engineer to join a SaaS company working with large-scale web data and consumer insights for global brands.

     

    Details
    Format: Full-time, Remote

    Duration: 6 months (with possible extension)

    Start: ASAP 
    English: Fluent 

     

    Requirements:

    - 5+ years of Data Engineering experience

    - Strong PySpark, Python, SQL

    - Hands-on experience with AWS

    - Practical experience with Databricks (critical)

    - Understanding of production-grade data pipelines and data quality

    - Confident English for cross-team collaboration

     

    Nice to have:

    - MLflow / LLM exposure

    - Big Data & Data Lake architectures

    - CI/CD, DevOps experience

  • 29 views · 2 applications · 1d

    Senior Data Engineer (Batch and Streaming)

    Full Remote · Countries of Europe or Ukraine · 2.5 years of experience · English - B2

    Role Overview

    We are building a greenfield analytics platform supporting both batch and real-time data processing. We are looking for a Senior Data Engineer who can design, implement, and evolve scalable data systems in AWS.

    This role combines hands-on development, architectural decision-making, and platform ownership.
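    As a rough illustration of the batch-plus-streaming pattern described above, here is a minimal PySpark Structured Streaming sketch; the Kafka broker, topic, S3 paths, and event schema are assumptions for the example, and the Spark Kafka connector package must be available on the cluster.

    # Minimal sketch: read events from Kafka and append them to an S3-backed table.
    # Broker, topic, bucket, and schema below are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("events-streaming-ingest").getOrCreate()

    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_type", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "events")                      # placeholder topic
        .load()
    )

    events = (
        raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    query = (
        events.writeStream.format("parquet")                # could be Delta in a lakehouse setup
        .option("path", "s3://example-bucket/events/")       # placeholder bucket
        .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
        .outputMode("append")
        .trigger(processingTime="1 minute")
        .start()
    )
    query.awaitTermination()

    The same transformation logic can be reused for batch backfills by swapping readStream/writeStream for read/write.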

     

    Core Responsibilities

    • Design and implement batch and streaming data pipelines using Apache Spark.
    • Build and evolve a scalable AWS-based data lake architecture.
    • Develop and maintain real-time data processing systems (event-driven pipelines).
    • Own performance tuning and cost optimization of Spark workloads.
    • Define best practices for data modeling, partitioning, and schema evolution.
    • Implement monitoring, observability, and data quality controls.
    • Contribute to infrastructure automation and CI/CD for data workflows.
    • Participate in architectural decisions and mentor other engineers.

       

    Required Qualifications

     

    Experience

    • 5+ years of experience in Data Engineering.
    • Strong hands-on experience with Apache Spark (including Structured Streaming).
    • Experience building both batch and streaming pipelines in production environments.
    • Proven experience designing AWS-based data lake architectures (S3, EMR, Glue, Athena).

       

    Streaming & Event-Driven Systems

    • Experience with event streaming platforms such as Apache Kafka or Amazon Kinesis.

       

    Data Architecture & Modeling

    • Experience implementing lakehouse formats such as Delta Lake.
    • Strong understanding of partitioning strategies and schema evolution.

       

    Performance & Reliability

    • Experience using SparkUI and AWS CloudWatch for profiling and optimization.
    • Strong understanding of Spark performance tuning (shuffle, skew, memory, partitioning).
    • Proven track record of cost optimization in AWS environments.

       

    DevOps & Platform Engineering

    • Experience with Docker and CI/CD pipelines.
    • Experience with Infrastructure as Code (Terraform, AWS CDK, or similar).
    • Familiarity with monitoring and observability practices.
  • 23 views · 1 application · 1d

    Senior Data Engineer

    Full Remote · EU · Product · 4 years of experience · English - B2

    Equals 5 is a Healthcare Marketing SaaS for Pharma and Life Sciences. Our platform leverages exclusive NPI-level targeting technology to help brands reach healthcare professionals across 20+ channels with precise, user-level reporting.

    We are looking for a Senior Data Engineer to join the Identity team. This is not a standard ETL role. We are building a dynamic data ecosystem where AI is deeply integrated - both as a productivity multiplier and as a core component of our data processing logic for identity data enrichment and data scoring.

    You will own the infrastructure that handles over 10,000 executions per minute, ensuring stability, scalability, and data integrity. You will work with a modern stack on Google Cloud Platform, utilizing Cloud Functions, Kubernetes, and a highly advanced N8N implementation (up to 20M executions per 24 hours).
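    For context, a minimal sketch of a cost-aware scoring step of the kind implied by this setup: cheap deterministic checks handle the obvious records, and the LLM is called only for ambiguous ones. Function names, fields, thresholds, and the model are assumptions for the example, not the team's actual logic.

    # Minimal sketch: score identity records cheaply first, escalate ambiguous
    # cases to an LLM. All names, fields, and thresholds are illustrative.
    import json
    from typing import Optional

    from openai import OpenAI

    client = OpenAI()

    def heuristic_score(record: dict) -> Optional[float]:
        """Return a confident score for trivial cases, or None to escalate to the LLM."""
        email = record.get("email", "")
        if not email or "@" not in email:
            return 0.0                      # clearly unusable identity data
        if record.get("verified_source"):
            return 1.0                      # already verified upstream
        return None                         # ambiguous: let the LLM decide

    def llm_score(record: dict) -> float:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            response_format={"type": "json_object"},
            messages=[
                {
                    "role": "system",
                    "content": 'Rate from 0.0 to 1.0 how likely this record describes a real, '
                               'reachable person. Reply as JSON: {"score": <float>}.',
                },
                {"role": "user", "content": json.dumps(record)},
            ],
        )
        return float(json.loads(response.choices[0].message.content)["score"])

    def score_identity(record: dict) -> float:
        quick = heuristic_score(record)
        return quick if quick is not None else llm_score(record)

    At the stated volumes, the share of records that reach the LLM is what drives both latency and cost, so the heuristics and the model choice are the main tuning knobs.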

     

    Responsibilities

    • Design and implement pipelines that utilize LLMs to analyze and score identity data in real-time. Integrate AI models directly into the decision-making loop, balancing accuracy with latency and cost.
    • Architect scalable data solutions using GCP, Python, and N8N. Decide how to route, process, and store massive volumes of identity data. Manage storage with BigQuery and Apache Iceberg for TBs of data.
    • Maintain and optimize N8N instances - complex dataflows, custom Python nodes, and performance tuning for 10k+ executions per minute.
    • Manage PostgreSQL performance under heavy load, optimizing complex queries and indexing strategies.
    • Utilize Apache Spark for data transformations and batch processing when lightweight cloud functions are not enough.
    • Proactively monitor the system.

     

    Requirements

    • 4-5+ years of experience in Data Engineering or Backend Engineering with a strong data focus
    • Production AI Integration: Experience integrating LLMs (OpenAI, Anthropic, Gemini) into production applications via API
    • Expertise in GCP: Cloud Functions, IAM, Networking, Cloud Run
    • Strong Python: Clean, efficient, and testable code; comfortable building custom logic
    • Kubernetes (K8s): Experience deploying and scaling services in containerized environments
    • PostgreSQL Mastery: Proven ability to handle heavy write/read loads and optimize schemas
    • English: B2+ (Upper-Intermediate) or higher

     

    Nice-to-have: 

    • N8N / Workflow Automation experience at a deep technical level

     

    What We Offer
    - Fully remote with flexible hours (aligned with EU timezones for syncs).
    - Influence on quality strategy across the entire engineering organization.
    - A cross-team role with visibility into every part of the product.
    - AI-first tooling.
    - Claude Code licenses and cutting-edge AI development workflows. 
    - A team with no bureaucracy where decisions are made fast.

  • 8 views · 0 applications · 1d

    Senior Azure Data Engineer IRC289060

    Full Remote · Ukraine · 4 years of experience · English - B2

    Description

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people’s lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

     

    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

     

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. 

    Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • 25 views · 3 applications · 1d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B2

    Our client is a global jewelry manufacturer that is transforming the retail experience with cutting-edge technology. The mission is to provide an exceptional shopping experience to our customers by leveraging data-driven insights and innovative solutions. We are looking for a talented Data Engineer to join our dynamic team and help us shape the future of retail, where you will play a critical role in developing and maintaining robust data pipelines in both Azure Synapse and Databricks.

    Joining our team, you will contribute to building a solid foundation for the data infrastructure, supporting our marketing analytics organization's goal of becoming more data-driven and customer-centric. You will collaborate closely with cross-functional teams, helping to drive impactful data product deliveries and optimizing our analytical framework for scalable insights globally.

     

    Responsibilities

     

    • Develop and maintain data products and pipelines in Databricks, Azure Data Factory, and Azure Synapse Analytics
    • Communicate with stakeholders and users of our data products by understanding their problems and supporting them in their needs
    • Optimize data pipelines for performance and scalability, automating repetitive tasks to improve efficiency and reduce the time from data ingestion to actionable insights
    • Implement and maintain data quality processes, including data validation, cleansing, and error handling, to ensure high data integrity across all systems
    • Improve existing data integration processes to provide better reliability and robustness
    • Partner with product managers, analysts, and business stakeholders to understand data requirements, provide data engineering support, and ensure data is accessible and usable for analysis and reporting
    • Stay up-to-date with the latest trends and best practices in data engineering, bringing innovative ideas and solutions to improve our data infrastructure and capabilities

     

    Skills Required

    • 4+ years of experience as a Data Engineer, ETL Developer, or similar role
    • Experience with Azure Synapse Analytics
    • Strong knowledge of SQL and Spark SQL
    • Understanding of dimensional data modelling concepts
    • Understanding of streaming data ingestion processes
    • Ability to develop & manage Apache Spark data processing applications using PySpark on Databricks
    • Experience with version control (e.g., Git), DevOps, and CI/CD
    • Experience with Python
    • Experience with Microsoft data platform, Microsoft Azure stack, and Databricks
    • Experience in Marketing, Retail, and e-commerce will be a plus.

    Soft Skills:
    • Strong problem-solving skills and the ability to work independently as well as part of a team.
    • Excellent communication skills, with the ability to translate technical concepts into business-friendly language.
    • Detail-oriented with a commitment to delivering high-quality, reliable data solutions.

  • 35 views · 7 applications · 1d

    Middle/Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

    About Digis:

    Digis is a European IT company with 200+ specialists delivering complex SaaS products, enterprise solutions, and AI-powered platforms worldwide. We ensure transparency, stability, and professional growth opportunities for all our team members.

     

    About The Project:

    This is a consulting company specializing in the implementation of Celonis Execution Management System (EMS) for large US enterprises.

    The project focuses on enterprise-level Celonis implementations, process mining enablement, and data integration across complex business environments.

    Time Zone: USA (working hours until 24:00 Kyiv time)

    Project type: Long-term delivery pipeline
    Work format: Fully remote

     

    Key Requirements:

    • 3+ years of Data Engineering experience
    • Hands-on experience with Celonis  
    • B1+ level of English

     

    Responsibilities:

    • Build and maintain Celonis data models
    • Develop PQL logic
    • Set up and optimize EMS pipelines
    • Integrate enterprise data sources
    • Create transformations for process mining analyses
    • Support consultants during client implementations

     

    Why This Project Is a Great Opportunity:

    • Work on large-scale Fortune 500 digital transformation initiatives
      Contribute to high-impact enterprise programs that optimize core business processes across global organizations
    • Stable, long-term delivery pipeline
      Join a project with continuous enterprise implementations, ensuring stability, predictable workload, and long-term professional growth
    • Embedded team model with direct client collaboration
      Become an integral part of client delivery teams, working closely with senior consultants and stakeholders in real business environments
    • Deep exposure to Celonis EMS and process mining expertise
      Gain hands-on experience with advanced Celonis implementations, PQL logic, and complex enterprise data landscapes, strengthening your niche expertise in process intelligence

     

    If this sounds like the right opportunity for you, apply now! We look forward to discussing this further.

  • 27 views · 0 applications · 2d

    Senior Data Engineer

    Full Remote · Croatia, Poland, Romania, Ukraine · 6 years of experience · English - B2

    Description

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people’s lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

     

    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

     

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
  • 242 views · 14 applications · 2d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B1

    We are looking for you!

    As we architect the next wave of data solutions in the AdTech and MarTech sectors, we're looking for a Senior Data Engineer: a maestro in data architecture and pipeline design. If you're a seasoned expert eager to lead, innovate, and craft state-of-the-art data solutions, we're keen to embark on this journey with you.

    Contract type: Gig contract.

    Skills and experience you can bring to this role

    Qualifications & experience:

    • 6+ years of intensive experience as a Data Engineer or in a similar role, with a demonstrable track record of leading large-scale projects;
    • Mastery in Python, SQL;
    • Deep understanding and practical experience with cloud data warehouses (Snowflake, BigQuery, Redshift);
    • Extensive experience building data and ML pipelines;
    • Experience with modern Scrum-based Software Development Life Cycle (SDLC);
    • Deep understanding of Git and its workflows;
    • Open to collaborating with data scientists and businesses.

    Nice to have:

    • Hands-on experience with Dagster, dbt, Snowflake and FastAPI;
    • Proven expertise in designing and optimizing large-scale data pipelines;
    • Comprehensive understanding of data governance principles and data quality management practices;
    • Understand marketing and media metrics (i.e., what conversion rate is and how it is calculated);
    • Exceptional leadership, communication, and collaboration skills, with a knack for guiding and nurturing teams.
       

    Educational requirements:

    • Bachelor’s degree in Computer Science, Information Systems, or a related discipline is preferred. A Master's degree or higher is a distinct advantage.

    What impact you’ll make 

    • Lead the design, development, testing, and maintenance of scalable data architectures, ensuring they align with business and technical objectives;
    • Spearhead the creation of sophisticated data pipelines using Python, leveraging advanced Snowflake capabilities such as Data Shares, Snowpipe, Snowpark, and more (a minimal sketch follows this list);
    • Collaborate intensively with data scientists, product teams, and other stakeholders to define and fulfill intricate data requirements for cross-channel budget optimization solutions;
    • Drive initiatives for new data collection, refining existing data sources, and ensuring the highest standards of data accuracy and reliability;
    • Set the gold standard for data quality, introducing cutting-edge tools and frameworks to detect and address data inconsistencies and inaccuracies; 
    • Identify, design, and implement process improvements, focusing on data delivery optimization, automation of manual processes, and infrastructure enhancements for scalability.
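    A minimal Snowpark sketch of a pipeline step like the one referenced above; the connection parameters, table, and column names are assumptions for the example, not the client's actual schema.

    # Minimal sketch: aggregate raw ad spend into a reporting table with Snowpark.
    # Connection parameters, table, and column names are illustrative assumptions.
    from snowflake.snowpark import Session
    from snowflake.snowpark import functions as F

    connection_parameters = {
        "account": "<account>",        # placeholders: supply real credentials
        "user": "<user>",
        "password": "<password>",
        "warehouse": "<warehouse>",
        "database": "<database>",
        "schema": "<schema>",
    }

    session = Session.builder.configs(connection_parameters).create()

    spend = session.table("RAW_AD_SPEND")
    daily_by_channel = (
        spend.group_by(F.col("CHANNEL"), F.col("SPEND_DATE"))
        .agg(F.sum(F.col("SPEND_USD")).alias("TOTAL_SPEND_USD"))
    )
    daily_by_channel.write.mode("overwrite").save_as_table("REPORTING.DAILY_CHANNEL_SPEND")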

    What you’ll get 

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.

  • 178 views · 38 applications · 2d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    We are looking for you!

    As we continue to design and build data-driven solutions across diverse domains, we’re seeking a Data Engineer who thrives on transforming data into impactful insights. If you’re passionate about crafting robust architectures, optimizing data pipelines, and enabling intelligent decision-making at scale, we’d love to have you join our global team and shape the next generation of data excellence with us.

    Contract type: Gig contract.

    Skills and experience you can bring to this role

    Qualifications & experience:

    • 3+ years of intensive experience as a Data Engineer or in a similar role, with a demonstrable track record of leading large-scale projects;
    • Mastery in Python and data stack (NumPy, Pandas, scikit-learn);
    • Good understanding of SQL/RDBMS and familiarity with data warehouses (BigQuery, Snowflake, Redshift, etc.);
    • Experience building ETL data pipelines (Airflow, Prefect, Dagster, etc.);
    • Experience with modern Scrum-based Software Development Life Cycle (SDLC);
    • Strong communication skills to explain technical insights to non-technical stakeholders.

    Nice to have:

    • Hands-on experience with Python web stack (Fast API / Flask);
    • Proven expertise in designing and optimizing large-scale data pipelines;
    • Comprehensive understanding of data governance principles and data quality management practices;
    • Understand marketing and media metrics (i.e., what conversion rate is and how it is calculated).
    • Exceptional leadership, communication, and collaboration skills, with a knack for guiding and nurturing teams.

       

    Educational requirements:

    • Bachelor’s degree in Computer Science, Information Systems, or a related discipline is preferred.

    What impact you’ll make 

    • Lead the design, development, testing, and maintenance of scalable data architectures, ensuring they align with business and technical objectives;
    • Spearhead the creation of sophisticated data pipelines using Python, leveraging advanced Snowflake capabilities such as Data Shares, Snowpipe, Snowpark, and more;
    • Collaborate intensively with data scientists, product teams, and other stakeholders to define and fulfill intricate data requirements for cross-channel budget optimization solutions;
    • Drive initiatives for new data collection, refining existing data sources, and ensuring the highest standards of data accuracy and reliability;
    • Set the gold standard for data quality, introducing cutting-edge tools and frameworks to detect and address data inconsistencies and inaccuracies; and
    • Identify, design, and implement process improvements, focusing on data delivery optimization, automation of manual processes, and infrastructure enhancements for scalability.

    What you’ll get 

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.

  • 27 views · 2 applications · 2d

    Senior Data Engineer (Healthcare domain)

    Full Remote · EU · 5 years of experience · English - B2

    Are you passionate about building large-scale cloud data infrastructure that makes a real difference? We are looking for a Senior Data Engineer to join our team and work on an impactful healthcare technology project. This role offers a remote work format with the flexibility to collaborate across international teams.

    At Sigma Software, we deliver innovative IT solutions to global clients in multiple industries, and we take pride in projects that improve lives. Joining us means working with cutting-edge technologies, contributing to meaningful initiatives, and growing in a supportive environment.


    CUSTOMER
    Our client is a leading medical technology company. Its portfolio of products, services, and solutions is at the center of clinical decision-making and treatment pathways. Patient-centered innovation has always been, and will always be, at the core of the company. The client is committed to improving patient outcomes and experiences, regardless of where patients live or what they face. The Customer is innovating sustainably to provide healthcare for everyone, everywhere. 


    PROJECT
    The project focuses on building and maintaining large-scale cloud-based data infrastructure for healthcare applications. It involves designing efficient data pipelines, creating self-service tools, and implementing microservices to simplify complex processes. The work will directly impact how healthcare providers access, process, and analyze critical medical data, ultimately improving patient care.

     

    Responsibilities:

    • Collaborate with the Product Owner and team leads to define and design efficient pipelines and data schemas
    • Build and maintain infrastructure using Terraform for cloud platforms
    • Design and implement large-scale cloud data infrastructure, self-service tooling, and microservices
    • Work with large datasets to optimize performance and ensure seamless data integration
    • Develop and maintain squad-specific data architectures and pipelines following ETL and Data Lake principles
    • Discover, analyze, and organize disparate data sources into clean, understandable schemas

     

    Requirements:

    • Hands-on experience with cloud computing services in data and analytics
    • Experience with data modeling, reporting tools, data governance, and data warehousing
    • Proficiency in Python and PySpark for distributed data processing
    • Experience with Azure, Snowflake, and Databricks
    • Experience with Docker and Kubernetes
    • Knowledge of infrastructure as code (Terraform)
    • Advanced SQL skills and familiarity with big data databases such as Snowflake, Redshift, etc.
    • Experience with stream processing technologies such as Kafka, Spark Structured Streaming
    • At least an Upper-Intermediate level of English 
  • 84 views · 10 applications · 2d

    Analytics Manager

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

        We are looking for an experienced Head of Analytics / Analytics Manager to lead the development and maintenance of company-wide reporting, manage core data sources, and drive the implementation of modern analytics and AI-enabled solutions. 
        This role combines hands-on expertise in Power BI, SQL, and Python with ownership of analytics processes, data quality, and a small analytics team. You will play a key role in ensuring reliable management reporting, supporting business growth, and evolving our analytics ecosystem.


    Key Responsibilities


    • Design, build, and maintain Power BI dashboards and reports for company management and business stakeholders;
    • Own and manage global data sources, including:
      - HRM system;
      - Time tracking system;
      - Agent schedules and workforce data;
    • Fully build and support reporting and data logic for the company’s main project, ensuring data accuracy and consistency;
    • Implement reporting for new projects, including:
      - Connecting to new data sources;
      - Integrating data via APIs;
      - Creating new dashboards and data models;
    • Develop and improve data models and DAX calculations in Power BI;
    • Write and optimize SQL queries and data transformations;
    • Participate in the development of Microsoft Fabric capabilities within existing processes;
    • Coordinate implementation of ML / forecasting solutions together with external vendors;
    • Lead and manage a small team: Reporting & Data Analyst and Operational Analyst;
    • Define priorities, distribute tasks, and review results;
    • Ensure documentation, stability, and reliability of reporting solutions;
    • Collect, process, and analyze Customer Experience (CX) data (CSAT, NPS, CES, QA scores, customer feedback, complaints, etc.);
    • Build CX dashboards and analytical views to monitor service quality and customer satisfaction

      Required Qualifications
       
    • Higher education in IT, Computer Science, Mathematics, Finance, or related field;
    • 3+ years of hands-on experience with Power BI;
    • Strong and practical knowledge of DAX;
    • 3+ years of experience with SQL and building complex queries;
    • 1+ year of experience with Python (for data processing / automation / ETL tasks);
    • Experience connecting to external systems via APIs;
    • Solid understanding of data modeling and BI best practices;
    • Experience working with large datasets;
    • English level: B1 or higher;

     

    Nice to Have
     

    • Experience with Microsoft Fabric (Dataflows Gen2, Lakehouse/Warehouse, Notebooks, Pipelines);
    • Exposure to forecasting or machine learning concepts;
    • Experience in BPO / Contact Center / Operations analytics;

     

    What We Offer

     

    • Opportunity to build and shape the analytics function
    • Direct impact on management decision-making
    • Participation in AI-driven analytics transformation
    • Professional growth in a fast-scaling company
  • 19 views · 0 applications · 2d

    Senior Azure Data Engineer IRC289060

    Full Remote · Ukraine · 4 years of experience · English - B2

    Description

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people’s lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

     

    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

     

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. 

    Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • 29 views · 1 application · 2d

    Senior Data Engineer

    Ukraine · 5 years of experience · English - B2

    Client

     

    Our client is a large international manufacturing company listed in the S&P 500 and operating within a complex, multi‑site global environment. As part of a Data Strategy Development initiative, the organization is working to define a clear, pragmatic, and scalable data strategy aimed at improving decision‑making, reducing manual processes, strengthening data ownership, and laying a solid foundation for future analytics and digital transformation. The engagement is currently focused on strategy definition and is expected to progress into platform and analytics implementation phases.

     

     

    Position overview

    The Senior Data Engineer supports the Data Solutions Architect during the strategy phase by validating technical assumptions and assessing the feasibility of the proposed data architecture. The role focuses on hands-on analysis of existing data assets, data pipelines, and data quality, as well as targeted proofs of concept to reduce technical and delivery risk without full production implementation.

     

    Responsibilities

    • Review and inventory existing data assets across enterprise and operational systems
    • Assess data quality and availability for priority business and analytics use cases
    • Validate feasibility of proposed data architecture and integration approaches
    • Analyze data sourcing options from ERP, manufacturing, logistics, and BI systems
    • Identify technical complexity, risks, and mitigation options
    • Develop lightweight proofs of concept to validate data availability or transformation approaches
    • Provide technical inputs into the implementation roadmap and sequencing

    Requirements

    • 5+ years of experience in data engineering within analytics-focused environments
    • Hands-on expertise with Azure and Snowflake ecosystems
    • Experience assessing existing data pipelines and data quality challenges
    • Strong understanding of data integration, transformation, and modeling concepts
    • Ability to balance hands-on technical work with strategic architecture validation
    • Experience working closely with architects and strategy teams

    Nice to have

    • Experience in manufacturing or industrial data environments
    • Familiarity with SAP ECC / SAP BW, ERP, and operational systems
    • Experience with Power BI datasets, Qlik Sense, Alteryx, ExQL
    • Experience contributing to strategy-phase or pre-implementation initiatives
  • 17 views · 0 applications · 2d

    Senior Data Solutions Consultant

    Ukraine · 3 years of experience · English - B2

    Client

    Our client is a large international manufacturing company listed in the S&P 500, operating a complex, multi-site global environment. As part of a Data Strategy Development initiative, the organization is defining a clear, pragmatic, and scalable data strategy to improve decision-making, reduce manual processes, strengthen data ownership, and establish a strong foundation for future analytics and digital transformation. The engagement focuses on strategy definition and is expected to continue into platform and analytics implementation phases.

     

     

    Position overview

    The Data Solutions Consultant provides strategic oversight and expert advisory support throughout the Data Strategy engagement. The role ensures alignment with industry best practices, modern data and analytics trends, and proven transformation frameworks.

    This position acts as a senior expert advisor across architecture, governance, operating model, and value realization topics, validating strategic decisions, challenging assumptions, and guiding the team and client stakeholders on complex data transformation questions.

     

    Responsibilities

    • Provide strategic oversight across all Data Strategy workstreams (business, technology, governance, operations)
    • Ensure alignment of the data strategy with industry best practices and modern data & analytics trends
    • Advise on complex data governance and operating model design challenges
    • Validate current-state assessment findings and strategic conclusions
    • Review and challenge proposed architecture, governance, and operating model approaches
    • Validate use case prioritization and value realization logic
    • Advise on change management and adoption approaches for enterprise data programs
    • Support risk identification and mitigation at strategy and model design level
    • Contribute to and shape executive-level Data Strategy messaging and narrative
    • Provide expert input into strategic principles, guardrails, and decision frameworks

    Requirements

    • Extensive experience in data strategy, data governance, and analytics transformation programs
    • Proven advisory or consulting background in enterprise data initiatives
    • Strong knowledge of industry best practices and data transformation frameworks
    • Experience advising senior and executive-level stakeholders
    • Ability to evaluate and validate strategic solution approaches across multiple domains
    • Strong strategic thinking and structured problem-solving skills
    • Experience with enterprise-scale data and analytics operating models

    Nice to have

    • Experience in manufacturing or industrial enterprise environments
    • Background in large-scale data or digital transformation programs
    • Familiarity with Azure, Snowflake, and modern cloud data platforms
    • Experience contributing to executive-level strategy reports and board-level materials
    • Consulting or principal-level advisory experience in global organizations