Data Engineer Jobs (143)

  • · 196 views · 8 applications · 5d

    Google Cloud Solutions Architect / Pre-Sales Engineer

    Full Remote · Ukraine · Product · 1 year of experience · English - B1

    Google Cloud Solutions Architect / Pre-Sales Engineer

     

    Company: Softprom Solutions
    Location: Remote (Ukraine)
    Employment: Full-time, FOP (contractor)

     

    About the role

    Softprom Solutions is looking for a Google Cloud Solutions Architect / Pre-Sales Engineer to join our Cloud team.

    This role is ideal for a specialist with hands-on experience in Google Cloud Platform and an active Google Cloud certification, who wants to work on real commercial projects, participate in pre-sales activities, and grow in cloud architecture.

     

    Responsibilities

    • Design and document Google Cloud architectures according to the Google Cloud Architecture Framework
    • Participate in pre-sales activities:
      • technical discovery with customers
      • solution design
      • demos and presentations
    • Deploy and configure core GCP services, including:
      • Compute Engine, Cloud Storage, Cloud SQL
      • VPC, IAM, Load Balancing
      • Cloud Functions / Cloud Run
    • Design and configure GCP networking:
      • VPC networks, subnets
      • Firewall rules
      • Routes
    • Implement and support Infrastructure as Code (IaC) using Terraform
    • Create technical and solution documentation
    • Act as a technical point of contact for sales and customers

     

    Requirements (Must have)

    • Active Google Cloud certification
      (Associate Cloud Engineer or Professional Cloud Architect)
    • Experience as Cloud Engineer / Solutions Architect / Pre-Sales Engineer
    • Practical understanding of Google Cloud core services:
      • Compute, Storage, Databases, Networking, Security
    • Solid networking knowledge:
      • TCP/IP (L3, L4)
      • IP addressing, subnetting
      • DNS
    • Understanding of cloud principles:
      • High Availability
      • Fault Tolerance
      • Scalability
      • Security
    • Linux skills (command line)

     

    Nice to have

    • Experience in pre-sales or customer-facing roles
    • Experience with Terraform
    • Basic scripting skills (Python, Bash)
    • Experience with Docker, Kubernetes (GKE), Cloud Run
    • Experience with AWS or multi-cloud environments

     

    Soft skills

    • Ability to communicate technical solutions to non-technical audiences
    • Structured and analytical thinking
    • Proactivity and ownership
    • Ukrainian – fluent
    • English – reading and basic spoken (technical / pre-sales)

     

    We offer

    • Compensation: USD 2,000 + project-based bonuses
    • FOP cooperation
    • Full-time workload
    • Real commercial Google Cloud projects
    • Participation in pre-sales and architecture design
    • Professional growth in Cloud & Solutions Architecture
    • International projects and vendors
    • Strong cloud team and mentorship
  • · 70 views · 13 applications · 15d

    Senior Data Engineer to $6000

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B2

    Job Description

    • Solid experience with the Azure data ecosystem: Data Factory, Databricks or Fabric, ADLS Gen2, Azure SQL, Blob Storage, Key Vault, and Functions
    • Proficiency in Python and SQL for building ingestion, transformation, and processing workflows
    • Clear understanding of Lakehouse architecture principles, Delta Lake patterns, and modern data warehousing
    • Practical experience building config-driven ETL/ELT pipelines, including API integrations and Change Data Capture (CDC)
    • Working knowledge of relational databases (MS SQL, PostgreSQL) and exposure to NoSQL concepts
    • Ability to design data models and schemas optimized for analytics and reporting workloads
    • Comfortable working with common data formats: JSON, Parquet, CSV
    • Experience with CI/CD automation for data workflows (GitHub Actions, Azure DevOps, or similar)
    • Familiarity with data governance practices: lineage tracking, access control, encryption
    • Strong problem-solving mindset with attention to detail
    • Clear written and verbal communication for async collaboration

       

    Nice-to-Have

    • Proficiency with Apache Spark using PySpark for large-scale data processing
    • Experience with Azure Service Bus/Event Hub for event-driven architectures
    • Familiarity with machine learning and AI integration within a data platform context (RAG, vector search, Azure AI Search)
    • Data quality frameworks (Great Expectations, dbt tests)
    • Experience with Power BI semantic models and Row-Level Security

       

    Job Responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Azure Data Factory, Synapse, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for batch and API-based data ingestion
    • Build Medallion architecture layers (Bronze → Silver → Gold) ensuring efficient, reliable, and performant data processing (a minimal example of one such step follows this list)
    • Ensure data governance, lineage, and compliance using Azure Key Vault and proper access controls
    • Collaborate with developers and business analysts to deliver trusted datasets for reporting, analytics, and AI/ML use cases
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Implement cross-system identity resolution (global IDs, customer/property keys across multiple platforms)
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
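
    For illustration, a minimal sketch of a single Bronze → Silver promotion step in PySpark on Delta Lake. The storage paths, column names, and dedup key are assumptions for the example (not the project's actual schema), and a configured Spark + Delta environment is assumed:

      # Illustrative Bronze -> Silver step on Delta Lake; paths and columns are placeholders.
      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.appName("bronze_to_silver_orders").getOrCreate()

      # Bronze: raw, append-only landing of ingested records.
      bronze = spark.read.format("delta").load("/lake/bronze/orders")

      # Silver: typed, deduplicated, quality-gated records for downstream Gold marts.
      silver = (
          bronze
          .filter(F.col("order_id").isNotNull())               # basic completeness gate
          .withColumn("order_ts", F.to_timestamp("order_ts"))  # enforce types
          .dropDuplicates(["order_id"])                        # drop replayed/duplicate events
      )

      silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")

    A Gold-layer mart would then typically aggregate such Silver tables into reporting-ready fact/dimension models.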

       

    Why TeamCraft?

    • Greenfield project - build architecture from scratch, no legacy debt
    • Direct impact - your pipelines power real AI products and business decisions
    • Small team, big ownership - no bureaucracy, fast iteration, your voice matters
    • Stable foundation - US-based company, 300+ employees
    • Growth trajectory - scaling with technology as the driver

     

    About the Project

    TeamCraft is a large U.S. commercial roofing company undergoing an ambitious AI transformation. We're building a centralized data platform from scratch - a unified Azure Lakehouse that integrates multiple operational systems into a single source of truth (Bronze → Silver → Gold).

    This is greenfield development with real business outcomes - not legacy maintenance.

  • · 60 views · 9 applications · 8d

    Senior Data Engineer (Healthcare domain)

    Full Remote · EU · 5 years of experience · English - None

    Are you passionate about building large-scale cloud data infrastructure that makes a real difference? We are looking for a Senior Data Engineer to join our team and work on an impactful healthcare technology project. This role offers a remote work format with the flexibility to collaborate across international teams.

    At Sigma Software, we deliver innovative IT solutions to global clients in multiple industries, and we take pride in projects that improve lives. Joining us means working with cutting-edge technologies, contributing to meaningful initiatives, and growing in a supportive environment.


    CUSTOMER
    Our client is a leading medical technology company. Its portfolio of products, services, and solutions is at the center of clinical decision-making and treatment pathways. Patient-centered innovation has always been, and will always be, at the core of the company. The client is committed to improving patient outcomes and experiences, regardless of where patients live or what they face. The client is innovating sustainably to provide healthcare for everyone, everywhere.


    PROJECT
    The project focuses on building and maintaining large-scale cloud-based data infrastructure for healthcare applications. It involves designing efficient data pipelines, creating self-service tools, and implementing microservices to simplify complex processes. The work will directly impact how healthcare providers access, process, and analyze critical medical data, ultimately improving patient care.

     

    Responsibilities:

    • Collaborate with the Product Owner and team leads to define and design efficient pipelines and data schemas
    • Build and maintain infrastructure using Terraform for cloud platforms
    • Design and implement large-scale cloud data infrastructure, self-service tooling, and microservices
    • Work with large datasets to optimize performance and ensure seamless data integration
    • Develop and maintain squad-specific data architectures and pipelines following ETL and Data Lake principles
    • Discover, analyze, and organize disparate data sources into clean, understandable schemas

     

    Requirements:

    • Hands-on experience with cloud computing services in data and analytics
    • Experience with data modeling, reporting tools, data governance, and data warehousing
    • Proficiency in Python and PySpark for distributed data processing
    • Experience with Azure, Snowflake, and Databricks
    • Experience with Docker and Kubernetes
    • Knowledge of infrastructure as code (Terraform)
    • Advanced SQL skills and familiarity with big data databases such as Snowflake, Redshift, etc.
    • Experience with stream processing technologies such as Kafka, Spark Structured Streaming
    • At least an Upper-Intermediate level of English 

     

  • · 60 views · 3 applications · 28d

    Middle Data Engineer IRC285068

    Full Remote · Ukraine · 3 years of experience · English - B2

    Description

    The GlobalLogic technology team is focused on next-generation health capabilities that align with the client's mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.

     

    Requirements

    MUST HAVE

    AWS Platform: Working experience with AWS data technologies, including S3
    Programming Languages: Strong programming skills in Python
    Data Formats: Experience with JSON, XML and other relevant data formats
    Healthcare Interoperability Tools: Previous experience with integration engines such as InterSystems, Lyniate, Redox, Mirth Connect, etc.

    Hands-on experience with Oracle Database and log-based Change Data Capture (CDC) replication using AWS Database Migration Service (DMS) for near real-time data ingestion.
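
    As a rough, hedged illustration of that DMS-based CDC pattern, a boto3 sketch that creates and starts a CDC-only replication task; every ARN, the region, and the schema name are placeholders, and the source/target endpoints and replication instance are assumed to already exist:

      # Hypothetical sketch: start a CDC-only AWS DMS replication task.
      # All ARNs and the schema name are placeholders.
      import json
      import boto3

      dms = boto3.client("dms", region_name="us-east-1")

      # Replicate every table in one source schema.
      table_mappings = {
          "rules": [{
              "rule-type": "selection",
              "rule-id": "1",
              "rule-name": "include-schema",
              "object-locator": {"schema-name": "APP_SCHEMA", "table-name": "%"},
              "rule-action": "include",
          }]
      }

      task = dms.create_replication_task(
          ReplicationTaskIdentifier="oracle-cdc-to-lake",
          SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
          TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
          ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
          MigrationType="cdc",                                    # log-based change capture only
          TableMappings=json.dumps(table_mappings),
      )

      dms.start_replication_task(
          ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
          StartReplicationTaskType="start-replication",
      )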

    CI/CD Tools: experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools
    Scripting and automation: experience in scripting languages such as Python, PowerShell, etc.
    Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc.
    Source Code Management: Expertise with git commands and associated VCS (GitLab, GitHub, Gitea, or similar)
    Documentation: Experience with Markdown and, in particular, Antora for creating technical documentation

     

    NICE TO HAVE
    Strongly Preferred:
    Previous Healthcare or Medical Device experience
    Other data technologies such as Snowflake, Trino/Starburst
    Experience working with Healthcare Data, including HL7v2, FHIR and DICOM
    FHIR and/or HL7 Certifications
    Building software classified as Software as a Medical Device (SaMD)
    Understanding of EHR technologies such as Epic, Cerner, etc.
    Experience implementing enterprise-grade cybersecurity and privacy by design in software products
    Experience working in Digital Health software
    Experience developing global applications
    Strong understanding of SDLC – Waterfall & Agile methodologies
    Software estimation
    Experience leading software development teams onshore and offshore

     

    Job responsibilities

    – Develops, documents, and configures system specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.

    – Participates in planning of system and development deployment, and is responsible for meeting compliance and security standards.

    – API development using AWS services in a scalable, microservices based architecture

    – Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.

    – May document testing and maintenance of system updates, modifications, and configurations.

    – May act as a liaison with key technology vendor technologists or other business functions.

    – Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.

    – Leverages platform process expertise to assess whether existing standard platform functionality will solve a business problem or a custom solution is required.

    – Test the quality of a product and its ability to perform a task or solve a problem.

    – Perform basic maintenance and performance optimization procedures in each of the primary operating systems.

    – Ability to document detailed technical system specifications based on business system requirements

    – Ensures system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001)

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

    Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • · 35 views · 1 application · 5d

    Middle Data Engineer IRC285068

    Full Remote · Ukraine · 3 years of experience · English - None

    Description

    The GlobalLogic technology team is focused on next-generation health capabilities that align with the client's mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.

     

    Requirements

    MUST HAVE

    AWS Platform: Working experience with AWS data technologies, including S3
    Programming Languages: Strong programming skills in Python
    Data Formats: Experience with JSON, XML and other relevant data formats
    Healthcare Interoperability Tools: Previous experience with integration engines such as InterSystems, Lyniate, Redox, Mirth Connect, etc.

    Hands-on experience with Oracle Database and log-based Change Data Capture (CDC) replication using AWS Database Migration Service (DMS) for near real-time data ingestion.

    CI/CD Tools: experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools
    Scripting and automation: experience in scripting languages such as Python, PowerShell, etc.
    Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc.
    Source Code Management: Expertise with git commands and associated VCS (GitLab, GitHub, Gitea, or similar)
    Documentation: Experience with Markdown and, in particular, Antora for creating technical documentation

     

    NICE TO HAVE
    Strongly Preferred:
    Previous Healthcare or Medical Device experience
    Other data technologies such as Snowflake, Trino/Starburst
    Experience working with Healthcare Data, including HL7v2, FHIR and DICOM
    FHIR and/or HL7 Certifications
    Building software classified as Software as a Medical Device (SaMD)
    Understanding of EHR technologies such as Epic, Cerner, etc.
    Experience implementing enterprise-grade cybersecurity and privacy by design in software products
    Experience working in Digital Health software
    Experience developing global applications
    Strong understanding of SDLC – Waterfall & Agile methodologies
    Software estimation
    Experience leading software development teams onshore and offshore

     

    Job responsibilities

    – Develops, documents, and configures system specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.

    – Participates in planning of system and development deployment, and is responsible for meeting compliance and security standards.

    – API development using AWS services in a scalable, microservices based architecture

    – Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.

    – May document testing and maintenance of system updates, modifications, and configurations.

    – May act as a liaison with key technology vendor technologists or other business functions.

    – Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.

    – Leverages platform process expertise to assess whether existing standard platform functionality will solve a business problem or a custom solution is required.

    – Test the quality of a product and its ability to perform a task or solve a problem.

    – Perform basic maintenance and performance optimization procedures in each of the primary operating systems.

    – Ability to document detailed technical system specifications based on business system requirements

    – Ensures system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001)

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

    Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • · 81 views · 3 applications · 28d

    Data Engineer IRC284644

    Full Remote · Ukraine · 4 years of experience · English - B2

    Description

    Our client is a luxury skincare and beauty brand. The brand is based in San Francisco and sells luxury skincare products worldwide.

    Client's main IT “product” is its e-commerce website, which functions as a digital platform to sell products, educate customers, and personalize experiences.

    • Runs on Salesforce Commerce Cloud (formerly Demandware) – an enterprise e-commerce platform that supports online shopping, order processing, customer accounts, and product catalogs.
    • Hosted on cloud infrastructure (e.g., AWS, Cloudflare) for reliable performance and security
    • Uses HTTPS/SSL encryption to secure data transfers.
    • Integrated marketing and analytics technologies such as Klaviyo (email & SMS automation), Google Tag Manager, and personalization tools to track behavior, optimize campaigns, and increase conversions

    It's both a shopping platform and a digital touchpoint for customers worldwide.

     

    Requirements

    • 2+ years of experience as a Data Engineer, Analytics Engineer, or in a similar data-focused role.
    • Strong SQL skills for complex data transformations and analytics-ready datasets.
    • Hands-on experience with Python for data pipelines, automation, and data processing.
    • Experience working with cloud-based data platforms (AWS preferred).
    • Solid understanding of data warehousing concepts (fact/dimension modeling, star schemas).
    • Experience building and maintaining ETL/ELT pipelines from multiple data sources.
    • Familiarity with data quality, monitoring, and validation practices.
    • Experience handling customer, transactional, and behavioral data in a digital or e-commerce environment.
    • Ability to work with cross-functional stakeholders (Marketing, Product, Analytics, Engineering).

    Nice to have:

    • Experience with Snowflake, Redshift, or BigQuery.
    • Experience with dbt or similar data transformation frameworks.
    • Familiarity with Airflow or other orchestration tools.
    • Experience with marketing and CRM data (e.g. Klaviyo, GA4, attribution tools).
    • Exposure to A/B testing and experimentation data.
    • Understanding of privacy and compliance (GDPR, CCPA).
    • Experience in consumer, retail, or luxury brands.
    • Knowledge of event tracking and analytics instrumentation.
    • Ability to travel + visa to the USA

     

    Job responsibilities

    • Design, build, and maintain scalable data pipelines ingesting data from multiple sources:
      e-commerce platform (e.g. Salesforce Commerce Cloud), CRM/marketing tools (Klaviyo), web analytics, fulfillment and logistics systems.
    • Ensure reliable, near-real-time data ingestion for customer behavior, orders, inventory, and marketing performance.
    • Develop and optimize ETL/ELT workflows using cloud-native tools.
    • Model and maintain customer, order, product, and session-level datasets to support analytics and personalization use cases.
    • Enable a 360° customer view by unifying data from website interactions, email/SMS campaigns, purchases, and returns.
    • Support data needs for personalization tools (e.g. product recommendation quizzes, ritual finders).
    • Build datasets that power marketing attribution, funnel analysis, cohort analysis, and LTV calculations (a small cohort/LTV sketch follows this list).
    • Enable data access for growth, marketing, and CRM teams to optimize campaign targeting and personalization
    • Ensure accurate tracking and validation of events, conversions, and user journeys across channels.
    • Work closely with Product, E-commerce, Marketing, Operations, and Engineering teams to translate business needs into data solutions.
    • Support experimentation initiatives (A/B testing, new digital experiences, virtual stores).
    • Act as a data partner in decision-making for growth, CX, and operational efficiency.
    • Build and manage data solutions on cloud infrastructure (e.g. AWS).
    • Optimize storage and compute costs while maintaining performance and scalability.
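
    To make the cohort/LTV bullet above concrete, a small pandas sketch that assigns customers to acquisition-month cohorts and accumulates revenue per cohort; the column names and sample rows are invented for the example:

      # Toy cohort/LTV rollup with pandas; columns and data are illustrative only.
      import pandas as pd

      orders = pd.DataFrame({
          "customer_id": [1, 1, 2, 2, 3],
          "order_ts": pd.to_datetime(
              ["2024-01-05", "2024-03-10", "2024-02-01", "2024-02-20", "2024-03-15"]),
          "revenue": [120.0, 80.0, 60.0, 40.0, 200.0],
      })

      # Acquisition cohort = month of each customer's first order.
      orders["order_month"] = orders["order_ts"].dt.to_period("M")
      orders["cohort"] = orders.groupby("customer_id")["order_month"].transform("min")

      # Months since acquisition, then cumulative revenue per cohort = a simple LTV curve.
      orders["month_index"] = (orders["order_month"] - orders["cohort"]).apply(lambda d: d.n)
      ltv_curve = (orders.groupby(["cohort", "month_index"])["revenue"].sum()
                         .groupby(level="cohort").cumsum())
      print(ltv_curve)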

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

    Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • · 19 views · 0 applications · 5d

    Senior Data Engineer IRC284644

    Full Remote · Ukraine · 4 years of experience · English - None

    Description

    Our client is a luxury skincare and beauty brand. The brand is based in San Francisco and sells luxury skincare products worldwide.

    Client's main IT “product” is its e-commerce website, which functions as a digital platform to sell products, educate customers, and personalize experiences.

    • Runs on Salesforce Commerce Cloud (formerly Demandware) – an enterprise e-commerce platform that supports online shopping, order processing, customer accounts, and product catalogs.
    • Hosted on cloud infrastructure (e.g., AWS, Cloudflare) for reliable performance and security
    • Uses HTTPS/SSL encryption to secure data transfers.
    • Integrated marketing and analytics technologies such as Klaviyo (email & SMS automation), Google Tag Manager, and personalization tools to track behavior, optimize campaigns, and increase conversions

    It's both a shopping platform and a digital touchpoint for customers worldwide.

     

    Requirements

    • 4+ years of experience as a Data Engineer, Analytics Engineer, or in a similar data-focused role.
    • Strong SQL skills for complex data transformations and analytics-ready datasets.
    • Hands-on experience with Python for data pipelines, automation, and data processing.
    • Experience working with cloud-based data platforms (AWS preferred).
    • Solid understanding of data warehousing concepts (fact/dimension modeling, star schemas).
    • Experience building and maintaining ETL/ELT pipelines from multiple data sources.
    • Familiarity with data quality, monitoring, and validation practices.
    • Experience handling customer, transactional, and behavioral data in a digital or e-commerce environment.
    • Ability to work with cross-functional stakeholders (Marketing, Product, Analytics, Engineering).

    Nice to have:

    • Experience with Snowflake, Redshift, or BigQuery.
    • Experience with dbt or similar data transformation frameworks.
    • Familiarity with Airflow or other orchestration tools.
    • Experience with marketing and CRM data (e.g. Klaviyo, GA4, attribution tools).
    • Exposure to A/B testing and experimentation data.
    • Understanding of privacy and compliance (GDPR, CCPA).
    • Experience in consumer, retail, or luxury brands.
    • Knowledge of event tracking and analytics instrumentation.
    • Ability to travel + visa to the USA

     

    Job responsibilities

    • Design, build, and maintain scalable data pipelines ingesting data from multiple sources:
      e-commerce platform (e.g. Salesforce Commerce Cloud), CRM/marketing tools (Klaviyo), web analytics, fulfillment and logistics systems.
    • Ensure reliable, near-real-time data ingestion for customer behavior, orders, inventory, and marketing performance.
    • Develop and optimize ETL/ELT workflows using cloud-native tools.
    • Model and maintain customer, order, product, and session-level datasets to support analytics and personalization use cases.
    • Enable a 360° customer view by unifying data from website interactions, email/SMS campaigns, purchases, and returns.
    • Support data needs for personalization tools (e.g. product recommendation quizzes, ritual finders).
    • Build datasets that power marketing attribution, funnel analysis, cohort analysis, and LTV calculations.
    • Enable data access for growth, marketing, and CRM teams to optimize campaign targeting and personalization
    • Ensure accurate tracking and validation of events, conversions, and user journeys across channels.
    • Work closely with Product, E-commerce, Marketing, Operations, and Engineering teams to translate business needs into data solutions.
    • Support experimentation initiatives (A/B testing, new digital experiences, virtual stores).
    • Act as a data partner in decision-making for growth, CX, and operational efficiency.
    • Build and manage data solutions on cloud infrastructure (e.g. AWS).
    • Optimize storage and compute costs while maintaining performance and scalability.

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

    Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • · 21 views · 4 applications · 3d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · English - B2

    We're seeking an experienced Senior Data Engineer to join a healthcare project and build, maintain, and evolve the data infrastructure that powers an AI-driven healthcare platform.

    The role focuses on designing robust data pipelines, managing a centralized data lake architecture using AWS Lake Formation, and ensuring high-quality processing of both structured and unstructured healthcare data. You'll work closely with data science, ML engineering, and backend teams to deliver scalable, secure, and compliant data solutions for aesthetic medicine applications.

     

    Responsibilities:

    • Design and implement scalable data pipelines for diverse healthcare data sources using AWS services;
    • Build and maintain a centralized data lake using AWS Lake Formation for secure storage of structured and unstructured medical data;
    • Develop data ingestion, transformation, and processing workflows for multimodal healthcare data, including medical images, clinical documentation, and practice data;
    • Implement preprocessing pipelines for unstructured data using tools such as Bedrock Data Automation and LlamaIndex;
    • Build and maintain ETL/ELT processes with proper data governance and security controls;
    • Implement data quality monitoring systems and validation frameworks;
    • Support RAG system implementation with optimized data storage and retrieval mechanisms;
    • Develop and maintain data crawlers for collecting domain-specific medical content;
    • Ensure HIPAA compliance across all data handling and processing workflows;
    • Collaborate with data scientists and ML engineers to provide high-quality data for model training and AI features.
       

    Required Qualifications:

    • 4+ years of experience in data engineering roles;
    • Strong experience with AWS data services (S3, Glue, Lake Formation, Athena, EMR);
    • Proficiency in Python, SQL, and data processing frameworks;
    • Experience with data lakehouse architectures and ETL pipeline development;
    • Strong background in managing unstructured data pipelines and preprocessing workflows;
    • Experience with AWS analytics services (Glue Catalog, Glue ETL, Athena) – a short Athena sketch follows this list;
    • Knowledge of data quality frameworks (Great Expectations, Glue Data Quality);
    • Familiarity with vector databases and embedding generation for LLMs;
    • Understanding of data security and HIPAA compliance requirements;
    • Experience with data orchestration tools (Dagster, Airflow, AWS MWAA);
    • At least an Upper Intermediate level of spoken and written English.
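
    As a small, hedged illustration of the Glue/Athena part of this stack, a boto3 sketch that runs a query against a Glue Catalog table through Athena and polls for completion; the database, table, and results bucket are placeholders:

      # Hypothetical sketch: query a Glue Catalog table via Athena with boto3.
      import time
      import boto3

      athena = boto3.client("athena", region_name="us-east-1")

      qid = athena.start_query_execution(
          QueryString="SELECT patient_id, count(*) AS encounters "
                      "FROM encounters GROUP BY patient_id LIMIT 10",
          QueryExecutionContext={"Database": "clinical_lake"},               # placeholder database
          ResultConfiguration={"OutputLocation": "s3://my-athena-results/"}, # placeholder bucket
      )["QueryExecutionId"]

      # Poll until the query finishes (Athena has no built-in boto3 waiter).
      while True:
          state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
          if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
              break
          time.sleep(2)

      if state == "SUCCEEDED":
          for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
              print([col.get("VarCharValue") for col in row["Data"]])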

       

    Preferred Qualifications:

    • Experience with Apache Iceberg table format for data lakehouse organization;
    • Experience using Dagster for data orchestration;
    • Hands-on experience with AWS CDK for Infrastructure as Code;
    • Background in preprocessing data for LLM applications (text extraction, semantic chunking);
    • Experience with real-time data streaming architectures;
    • Familiarity with healthcare data structures and medical terminology;
    • Experience with multi-account AWS data governance;
    • Background in healthcare data engineering or HIPAA-compliant systems.

       

    IT Craft offers:

    • Competitive compensation according to the qualifications;
    • Flexible working hours, remote work;
    • Opportunity for career growth;
    • Rewards for sports activities;
    • In-house English training;
    • A friendly team of open-minded people.

    Please send your CV.

    By submitting your application, you consent to the processing of your personal data in accordance with IT Craft's Privacy Policy, available at https://itechcraft.com/datenschutz/.

  • · 24 views · 0 applications · 1d

    Big Data Engineer to $8000

    Full Remote · Bulgaria, Poland, Romania · Product · 5 years of experience · English - B2

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.

     

    About the Product:

    The product is an enterprise-grade digital experience platform that provides real-time visibility into system performance, application stability, and end-user experience across on-premises, virtual, and cloud environments. It ingests large volumes of telemetry from distributed agents on employee devices and infrastructure, processes and enriches data through streaming pipelines, detects anomalies, and stores analytical data for monitoring and reporting. The platform serves a global customer base with high throughput and strict requirements for security, correctness, and availability. Rapid adoption has driven significant year-over-year growth and demand from large, distributed teams seeking to secure and stabilize digital environments without added complexity.

     

    About the Role:

    This is a true Big Data engineering role focused on designing and building real-time data pipelines that operate at scale in production environments serving real customers. You will join a senior, cross-functional platform team responsible for the end-to-end data flow: ingestion, processing, enrichment, anomaly detection, and storage. You will own both architecture and delivery, collaborating with Product Managers to translate requirements into robust, scalable solutions and defining guardrails for data usage, cost control, and tenant isolation. The platform is evolving from distributed, product-specific flows to a centralized, multi-region, highly observable system designed for rapid growth, advanced analytics, and future AI-driven capabilities. Strong ownership, deep technical expertise, and a clean-code mindset are essential.

     

    Key Responsibilities:

    • Design, build, and maintain high-throughput, low-latency data pipelines handling large volumes of telemetry.
    • Develop real-time streaming solutions using Kafka and modern stream-processing frameworks (Flink, Spark, Beam, etc.) – a minimal consumer sketch follows this list.
    • Contribute to the architecture and evolution of a large-scale, distributed, multi-region data platform.
    • Ensure data reliability, fault tolerance, observability, and performance in production environments.
    • Collaborate with Product Managers to define requirements and translate them into scalable, safe technical solutions.
    • Define and enforce guardrails for data usage, cost optimization, and tenant isolation within a shared platform.
    • Participate actively in system monitoring, troubleshooting incidents, and optimizing pipeline performance.
    • Own end-to-end delivery: design, implementation, testing, deployment, and monitoring of data platform components.
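
    A minimal sketch of the kind of streaming consumer described above, using the kafka-python client; topic names, the broker address, and the naive threshold rule are placeholders for real enrichment/anomaly-detection logic (which would normally live in Flink/Spark/Beam operators):

      # Minimal telemetry consumer/producer sketch with kafka-python; names are placeholders.
      import json
      from kafka import KafkaConsumer, KafkaProducer

      consumer = KafkaConsumer(
          "device-telemetry",                       # placeholder input topic
          bootstrap_servers="localhost:9092",
          group_id="anomaly-detector",
          auto_offset_reset="earliest",
          value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
      )
      producer = KafkaProducer(
          bootstrap_servers="localhost:9092",
          value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
      )

      for message in consumer:
          event = message.value
          # Toy rule: flag CPU spikes and forward them to a downstream topic.
          if event.get("cpu_pct", 0) > 95:
              producer.send("telemetry-anomalies", {**event, "anomaly": "cpu_spike"})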

     

    Required Competence and Skills:

    • 5+ years of hands-on experience in Big Data or large-scale data engineering roles.
    • Strong programming skills in Java or Python, with a willingness to adopt Java and frameworks like Vert.x or Spring.
    • Proven track record of building and operating production-grade data pipelines at scale.
    • Solid knowledge of streaming technologies such as Kafka, Kafka Streams, Flink, Spark, or Apache Beam.
    • Experience with cloud platforms (AWS, Azure, or GCP) and designing distributed, multi-region systems.
    • Deep understanding of production concerns: availability, data loss prevention, latency, and observability.
    • Hands-on experience with data stores such as ClickHouse, PostgreSQL, MySQL, Redis, or equivalents.
    • Strong system design skills, able to reason about trade-offs, scalability challenges, and cost efficiency.
    • Clean code mindset, solid OOP principles, and familiarity with design patterns.
    • Experience with AI-first development tools (e.g., GitHub Copilot, Cursor) is a plus.

     

    Nice to Have:

    • Experience designing and operating globally distributed, multi-region data platforms.
    • Background in real-time analytics, enrichment, or anomaly detection pipelines.
    • Exposure to cost-aware data architectures and usage guardrails.
    • Experience in platform or infrastructure teams serving multiple products.

     

    Why Us:

    - We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).

    - We provide full accounting and legal support in all countries where we operate.

    - We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

    - We offer a highly competitive package with yearly performance and compensation reviews.

  • · 20 views · 4 applications · 1d

    Senior Data Engineer (Data Competency Center)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Are you a Senior Data Engineer passionate about building scalable, secure, and high-performance data solutions? Join our Data Engineering Center of Excellence at Sigma Software and work on diverse projects that challenge your skills and inspire innovation.

     

    At Sigma Software, we value expertise, continuous learning, and a supportive environment where your career path is shaped around your strengths. You'll be part of a collaborative team, gain exposure to cutting-edge technologies, and work in an inclusive culture that fosters growth and innovation.

    Project

    Our Data Engineering Center of Excellence (CoE) is a specialized unit focused on designing, building, and optimizing data platforms, pipelines, and architectures. We work across diverse industries, leveraging modern data stacks to deliver scalable, secure, and cost-efficient solutions.

    Job Description

    • Collaborate with clients and internal teams to clarify technical requirements and expectations
    • Implement architectures using Azure or AWS cloud platforms
    • Design, develop, optimize, and maintain squad-specific data architectures and pipelines
    • Discover, analyze, and organize disparate data sources into clean, understandable data models
    • Evaluate new tools for analytical data engineering or data science
    • Suggest and contribute to training and improvement plans for analytical data engineering skills, standards, and processes

    Qualifications

    • 5+ years of experience with Python and SQL
    • Hands-on experience with AWS services (API Gateway, Kinesis, Athena, RDS, Aurora)
    • Proven experience building ETL pipelines for analytics/internal operations
    • Experience developing and integrating APIs
    • Solid understanding of Linux OS
    • Familiarity with distributed applications and DevOps tools
    • Strong troubleshooting/debugging skills
    • English level: Upper-Intermediate

    WILL BE A PLUS:

    • 2+ years with Hadoop, Spark, or Airflow
    • Experience with DAGs/orchestration tools
    • Experience with Snowflake-based data warehouses
    • Experience developing event-driven data pipelines

    PERSONAL PROFILE:

    • Passion for data processing and continuous learning
    • Strong problem-solving skills and analytical thinking
    • Ability to mentor and guide team members
    • Effective communication and collaboration skills
  • · 15 views · 2 applications · 1d

    Senior Data Engineer

    Full Remote · EU · 3 years of experience · English - B2

    We are looking for an experienced Data Engineer to join a long-term B2C project. The main focus is on building Zero ETL pipelines, as well as maintaining and improving existing ones.

    Responsibilities:
    - Build and maintain scalable Zero ETL pipelines.
    - Design and optimize data warehouses and data lakes on AWS (Glue, Firehose, Lambda, SageMaker).
    - Work with structured and unstructured data, ensuring quality and accuracy.
    - Optimize query performance and data processing workflows (Spark, SQL, Python).
    - Collaborate with engineers, analysts, and business stakeholders to deliver data-driven solutions.

    Requirements:
    - 5+ years of experience in Data Engineering.
    - Advanced proficiency in Spark, Python, SQL.
    - Expertise with AWS Glue, Firehose, Lambda, SageMaker.
    - Experience with ETL tools (dbt, Airflow etc.).
    - Background in B2C companies is preferred.
    - JavaScript and Data Science knowledge are a plus.
    - Degree in Computer Science (preferred, not mandatory).

    We offer:
    - Remote full-time job, B2B contract
    - 12 sick leaves and 18 paid vacation business days per year
    - Comfortable work conditions (including MacBook Pro and Dell monitor on each workplace)
    - Smart environment
    - Interesting projects from renowned clients
    - Flexible work schedule
    - Competitive salary according to the qualifications
    - Guaranteed full workload during the term of the contract
     

  • · 196 views · 61 applications · 10d

    Data Engineer

    Full Remote · Worldwide · 3 years of experience · English - B2

    The CHI Software team is not standing still. We love our job and give it one hundred percent of ourselves! Every new project is a challenge that we face successfully. The only thing that can stop us is… Wait, it's nothing! The number of projects is growing, and with them, our team too. And now we need a Data Engineer.
     

    Project Description:

    It is a real-time data processing and analytics solution for a high-traffic web application. 
    Tech stack: AWS Glue Studio, Redshift, RDS, Airflow, AWS Step Functions, Lambda, AWS Kinesis, Athena, Apache Iceberg, AWS Glue DataBrew, S3, OpenSearch, Python, SQL, CI/CD, dbt, Snowflake.


     Responsibilities:

    • Design a scalable and robust AWS cloud architecture;
    • Utilize AWS Kinesis for real-time data streaming and aggregation;
    • Implement AWS Lambda for serverless data processing, reducing operational costs (a minimal handler sketch follows this list);
    • Configure AWS RDS (Relational Database Service) for structured data storage and AWS DynamoDB for NoSQL requirements;
    • Ensure data security and compliance with AWS IAM (Identity and Access Management) and encryption services;
    • Develop and deploy data pipelines using AWS Glue for ETL processes;
    • Write Python scripts and SQL queries for data transformation and loading;
    • Set up continuous integration and continuous deployment (CI/CD) pipelines using AWS CodePipeline and CodeBuild;
    • Monitor system performance and data quality using AWS CloudWatch and custom logging solutions;
    • Collaborate with other teams to integrate data sources and optimize data flow;
    • Achieve a highly scalable real-time data processing system, resulting in a 40% increase in data analysis efficiency and a significant reduction in operational costs;
    • Build ETL pipelines from S3 to AWS OpenSearch using AWS Glue;
    • Upper-Intermediate or higher English level.
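
    For context, a minimal sketch of the Kinesis → Lambda leg mentioned above: a handler that decodes incoming records and returns a count; the downstream write to Redshift/S3/OpenSearch is deliberately stubbed, and the event shape follows the standard Kinesis trigger:

      # Hypothetical sketch of a Lambda handler for a Kinesis event source.
      import base64
      import json

      def handler(event, context):
          cleaned = []
          for record in event["Records"]:
              payload = base64.b64decode(record["kinesis"]["data"])  # Kinesis data is base64-encoded
              cleaned.append(json.loads(payload))
          # A real pipeline would write these events to Redshift/S3/OpenSearch here.
          return {"processed": len(cleaned)}
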
  • · 71 views · 14 applications · 30d

    Senior Data Engineer (Palantir Foundry)

    Full Remote · Ukraine · 3 years of experience · English - B2

    PwC is a network of over 370,000 employees in 149 countries focused on providing the highest quality services in the areas of audit, tax advisory, consulting and technology development.

     

    What we offer:

    -Official employment;

    -Remote work opportunity;

    -Annual performance and grade review;

    -A Dream team of experienced colleagues and high-class specialists;

    -Language courses (English & Polish languages);

    -Soft skills development;

    -Personal development plan and career coach;

    -Corporate events and team-buildings.

     

    Job Description:

    We are launching a strategically important Data Analytics project for government institutions using the Palantir Foundry platform. This project is critical for advancing Big Data and analytics in the government sector.

     

    Your Responsibilities:

    • Lead the implementation of solutions based on Palantir Foundry.
    • Work with large datasets from multiple sources and build analytical models.
    • Communicate with clients and internal teams (ability to explain technical solutions clearly).
    • Possible mentoring and training of internal specialists.
    • Participate in projects with government organizations.

     

    Requirements:

    • Hands-on experience with Palantir Foundry (a minimal transform sketch follows this list).
    • Strong technical expertise in Big Data, analytics, and data integration.
    • Experience with cloud data platforms (Azure/AWS/GCP).
    • Proficiency in Python, SQL, Spark, PySpark.
    • Optional: experience with ReactJS, TypeScript.
    • Soft skills: excellent communication skills and ability to work with clients.
    • English – Strong B2+.
    • Willingness to collaborate with international teams and government projects.
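
    For a flavour of what Foundry pipeline work looks like, a minimal sketch using the transforms-python API typically available in Foundry Code Repositories; the dataset paths and columns are placeholders, not a real project structure:

      # Hedged sketch of a Foundry PySpark transform; dataset paths/columns are placeholders.
      from transforms.api import transform_df, Input, Output
      from pyspark.sql import functions as F

      @transform_df(
          Output("/Org/analytics/datasets/clean_cases"),          # placeholder output dataset
          raw_cases=Input("/Org/analytics/datasets/raw_cases"),   # placeholder input dataset
      )
      def clean_cases(raw_cases):
          # Keep well-formed rows and deduplicate on the business key.
          return (raw_cases
                  .filter(F.col("case_id").isNotNull())
                  .dropDuplicates(["case_id"]))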

     

    We Offer:

    • Participation in a project that makes a real impact (government data analytics).
    • Remote work opportunity.
    • Work with cutting-edge technologies and professional growth in Big Data.

     

    Ready for the challenge? Send your resume and join the team that shapes the future!


    Privacy and personal data policy:
    https://www.pwc.com/ua/uk/about/privacy.html

  • · 90 views · 7 applications · 20d

    Data Engineer – Azure Data Factory, Functions, Snowflake (Nature-based Solutions) to $5500

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B2

    About the Client & Mission

    Our client is the world's largest environmental nonprofit focused on reforestation and sustainable development (Nature-based Solutions). We are building a modern cloud data platform on Azure and Snowflake that will serve as a single source of truth and enable faster, data-driven decision-making.

     

    About the Initiative

    This role supports a Data Warehouse initiative focused on tangible delivery impact: trusted data, clear and scalable models, and fast release cycles (1–3 months) with well-defined SLAs. You'll work in a collaborative setup across Data Engineering ↔ BI ↔ Product, often handling 1–2 parallel workstreams with proactive risk and dependency management.

     

    Core Stack

    • ELT/DWH: Azure Data Factory + Azure Functions (Python) → Snowflake (a minimal sketch follows this list)
    • CI/CD: Azure DevOps pipelines + DL Sync (Snowflake objects and pipeline deployments)
    • Primary data sources: CRM/ERP (Dynamics 365, Salesforce), MS SQL, API-based ingestion, CDC concepts
    • Data formats: JSON, Parquet.
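
    As a rough sketch of how these pieces connect, a timer-triggered Azure Function (Python, v1 programming model, so the schedule lives in function.json) that pulls a batch from a placeholder API and stages it as JSON for a downstream Snowflake COPY INTO; the endpoint, container, and environment variables are illustrative assumptions, not project specifics:

        import datetime
        import json
        import os

        import azure.functions as func
        import requests
        from azure.storage.blob import BlobServiceClient

        def main(mytimer: func.TimerRequest) -> None:
            # Pull one batch from a hypothetical source API.
            records = requests.get(
                "https://example-source/api/orders",  # placeholder endpoint
                headers={"Authorization": f"Bearer {os.environ['SOURCE_API_TOKEN']}"},
                timeout=30,
            ).json()

            # Stage the batch as a dated JSON blob; a Snowflake COPY INTO (or ADF activity) picks it up downstream.
            blob_service = BlobServiceClient.from_connection_string(os.environ["STAGING_STORAGE_CONN"])
            blob_name = f"orders/{datetime.datetime.utcnow():%Y/%m/%d/%H%M%S}.json"
            blob_service.get_blob_client(container="raw", blob=blob_name).upload_blob(
                json.dumps(records), overwrite=True
            )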

       

    Team (our side)

    Lead Data Engineering, PM, DevOps, QA.

     

    Your Responsibilities

    • Design, build, and maintain incremental and full-refresh ELT pipelines (ADF + Azure Functions → Snowflake).
    • Develop and optimize Snowflake SQL for the DWH and data marts (Star Schema, incremental patterns, basic SCD2); see the sketch after this list.
    • Build production-grade Python code in Azure Functions for ingestion, orchestration, and lightweight pre-processing.
    • Implement and maintain data quality controls (freshness, completeness, duplicates, late-arriving data).
    • Support CI/CD delivery for Snowflake objects and pipelines across dev/test/prod (Azure DevOps + DL Sync).
    • Contribute to documentation, best practices, and operational standards for the platform.
    • Communicate clearly and proactively: status → risk → options → next step, ensuring predictable delivery.
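
    To make the incremental-load expectation concrete, a minimal sketch of a MERGE issued from Python via the Snowflake connector; the tables, columns, and warehouse are hypothetical, and a full SCD2 flow would add effective-date / current-flag handling on top of this:

        import os
        import snowflake.connector

        # Upsert changed/new rows from a staging table into a dimension (hypothetical object names).
        MERGE_SQL = """
        MERGE INTO analytics.dim_customer AS tgt
        USING staging.customer_updates AS src
            ON tgt.customer_id = src.customer_id
        WHEN MATCHED AND src.updated_at > tgt.updated_at THEN UPDATE SET
            name = src.name,
            segment = src.segment,
            updated_at = src.updated_at
        WHEN NOT MATCHED THEN INSERT (customer_id, name, segment, updated_at)
            VALUES (src.customer_id, src.name, src.segment, src.updated_at)
        """

        conn = snowflake.connector.connect(
            account=os.environ["SNOWFLAKE_ACCOUNT"],
            user=os.environ["SNOWFLAKE_USER"],
            password=os.environ["SNOWFLAKE_PASSWORD"],
            warehouse="TRANSFORM_WH",
            database="DWH",
        )
        try:
            conn.cursor().execute(MERGE_SQL)
        finally:
            conn.close()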

       

    Requirements (Must-have)

    • 4+ years in Data Engineering or related roles.
    • Strong Snowflake SQL (CTEs, window functions, COPY INTO, MERGE).
    • Hands-on experience with incremental loading (watermarks, merge patterns) and basic SCD2 (effective dating / current flag).
    • Strong Python (production-ready code), including API integration (pagination, retries, error handling), logging, configuration, and secrets handling; a short sketch follows this list.
    • Solid experience with Azure Data Factory (pipelines, parameters, triggers) and Azure Functions (HTTP/Timer triggers, idempotency, retries).
    • Understanding of ELT/DWH modeling (Star Schema, fact/dimension design, performance implications of joins).
    • CI/CD familiarity: Azure DevOps and automated deployment practices for data platforms (DL Sync for Snowflake is a strong plus).
    • Strong communication skills and a proactive, accountable approach to teamwork.
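
    A short sketch of the paginated-ingestion-with-retries pattern referenced above; the endpoint, paging scheme, and page size are placeholders (real sources such as D365 or Salesforce each have their own paging and auth):

        import time

        import requests

        BASE_URL = "https://example-source/api/v1/invoices"  # placeholder endpoint

        def fetch_all(session: requests.Session, max_retries: int = 3) -> list[dict]:
            records, page = [], 1
            while True:
                for attempt in range(1, max_retries + 1):
                    try:
                        resp = session.get(BASE_URL, params={"page": page, "page_size": 500}, timeout=30)
                        resp.raise_for_status()
                        break
                    except requests.RequestException:
                        if attempt == max_retries:
                            raise
                        time.sleep(2 ** attempt)  # simple exponential backoff
                batch = resp.json().get("items", [])
                if not batch:
                    return records
                records.extend(batch)
                page += 1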

       

    Nice to Have

    • PySpark (DataFrame API, joins, aggregations; general distributed processing understanding).
    • Experience with D365 / Salesforce, MS SQL sources, API-based ingestion, and CDC patterns.
    • Data governance/security basics, Agile/Scrum, and broader analytics tooling exposure.

       

    Selection Process (Transparent & Practical)

    Stage 1 – Intro + TA + Short Tech Screen (40–60 min, Zoom):

    • project context (multi-project setup, 1–3 month delivery cycles), must-haves for Azure/ELT, a short SQL/Python scenario;
    • soft skills & culture-match discussion covering: Proactive communication with stakeholders, Critical thinking & judgment, Problem solving & systems thinking, Ownership & maturity.

       

    Stage 2 – Deep-Dive Technical Interview (75–90 min, with 2 engineers):
    Live SQL (CTE/window + incremental load/SCD2 approach), PySpark mini-exercises, Azure lakehouse architecture discussion, plus a mini-case based on a real delivery situation.
    No take-home task – we simulate day-to-day work during the session.

     

    What We Offer

    • Competitive compensation.
    • Learning and growth alongside strong leaders, deepening expertise in Snowflake / Azure / DWH.
    • Opportunity to expand your expertise over time across diverse, mission-driven & AI projects.
    • Flexible work setup: remote / abroad / office (optional), gig contract (with an option to transition if needed).
    • Equipment and home-office support.
    • 36 paid days off per year: 20 vacation days + UA public holidays (and related days off, as applicable).
    • Monthly cafeteria benefit: $25 to support your personal needs (learning, mental health support, etc.).
    • Performance reviews: ongoing feedback, compensation review after 12 months, then annually.
    • Paid sabbatical after 5 years with the company.

       

    P.S. Dear fellow Ukrainians,
    we kindly ask you to apply for this role in a professional and well-reasoned manner, clearly highlighting the experience that is most relevant to the position.

    If you are unsure whether your background fully matches the requirements, please feel free to mention this openly in your application. This will not reduce your chances of being considered; it helps us review your profile fairly and prioritize candidates based on overall fit for the role.

    More
  • · 71 views · 6 applications · 30d

    Data Engineer ID47465

    Full Remote · Ukraine · 2 years of experience · English - B2
    Important: after confirming your application on this platform, you'll receive an email with the next step: completing your application on our internal site, LaunchPod. So keep an eye on your inbox and don't miss this step – without it, the process can't...

    Important: after confirming your application on this platform, you'll receive an email with the next step: completing your application on our internal site, LaunchPod. So keep an eye on your inbox and don't miss this step – without it, the process can't move forward.

     

    Why join us
    If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you! :)

     

    About the role
    As a Middle/Senior Data Engineer, you will play a pivotal role in evolving our patented CDI™ Platform by transforming massive streams of live data into predictive insights that safeguard global supply chains. This role offers a unique opportunity to directly influence product vision by building models and streaming architectures that address real-world disruptions. You will work in an innovative environment where your expertise in Spark and Python drives meaningful growth and delivers critical intelligence to industry leaders.

     

    What you will do
    ● Become an expert on platform solutions and how they solve customer challenges within Supply Chain & related arenas;
    ● Identify, retrieve, manipulate, relate, and exploit multiple structured and unstructured data sets from thousands of sources, including building or generating new data sets as appropriate;
    ● Create methods, models, and algorithms to understand the meaning of streaming live data and translate it into insightful predictive output for customer applications and data products;
    ● Educate internal teams on how data science and resulting predictions can be productized for key industry verticals;
    ● Keep up to date on competitive solutions, products, and services.

     

    Must haves
    ● 2+ years of experience in cloud-based data parsing and analysis, data manipulation and transformation, and visualization;
    ● Programming and scripting experience with Scala or Python;
    ● Experience with Apache Spark or similar frameworks;
    ● Experience with SQL, at least at an introductory level;
    ● Ability to explain technical and statistical findings to non-technical users and decision makers;
    ● Experience in technical consulting and conceptual solution design;
    ● Understanding of Hadoop and Apache-based tools to exploit massive data sets;
    ● Bachelor's degree;
    ● Upper-intermediate English level.

     

    Nice to haves
    ● Experience with Java;
    ● Experience with Kafka or other streaming architecture frameworks (see the sketch after this list);
    ● Domain knowledge in Supply Chain and/or transportation management and visibility technologies.
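
    For context on the streaming side mentioned above, a minimal Spark Structured Streaming sketch that reads from Kafka; the topic, schema, broker address, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available:

        from pyspark.sql import SparkSession, functions as F
        from pyspark.sql.types import StringType, StructField, StructType, TimestampType

        # Hypothetical shipment-event schema and Kafka topic.
        schema = StructType([
            StructField("shipment_id", StringType()),
            StructField("status", StringType()),
            StructField("event_ts", TimestampType()),
        ])

        spark = SparkSession.builder.appName("shipment-stream").getOrCreate()

        events = (
            spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
            .option("subscribe", "shipment-events")            # placeholder topic
            .load()
            .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
            .select("e.*")
        )

        # Persist delayed shipments for downstream predictive models.
        query = (
            events.filter(F.col("status") == "DELAYED")
            .writeStream
            .format("parquet")
            .option("path", "/data/streams/delayed_shipments")
            .option("checkpointLocation", "/data/checkpoints/delayed_shipments")
            .outputMode("append")
            .start()
        )
        query.awaitTermination()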


    Perks and benefits
    ● Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps
    ● Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities
    ● A selection of exciting projects: Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands
    ● Flextime: Tailor your schedule for an optimal work-life balance, with the option to work from home or from the office – whatever makes you happiest and most productive.
     

    Meet Our Recruitment Process

    Asynchronous stage – an automated, self-paced track that helps us move faster and give you quicker feedback:
    ● Short online form to confirm basic requirements
    ● 30–60 minute skills assessment
    ● 5-minute introduction video

    Synchronous stage – Live interviews
    ● Technical interview with our engineering team (scheduled at your convenience)
    ● Final interview with your future teammates

     

    If it's a match, you'll get an offer!

    More