Data Engineer Jobs

152 jobs

  • 54 views · 2 applications · 1d

    Senior Data Engineer (Healthcare domain)

    Full Remote · EU · 5 years of experience · English - B2

    Are you passionate about building large-scale cloud data infrastructure that makes a real difference? We are looking for a Senior Data Engineer to join our team and work on an impactful healthcare technology project. This role offers a remote work format with the flexibility to collaborate across international teams.

    At Sigma Software, we deliver innovative IT solutions to global clients in multiple industries, and we take pride in projects that improve lives. Joining us means working with cutting-edge technologies, contributing to meaningful initiatives, and growing in a supportive environment.


    CUSTOMER
    Our client is a leading medical technology company. Its portfolio of products, services, and solutions is at the center of clinical decision-making and treatment pathways. Patient-centered innovation has always been, and will always be, at the core of the company. The client is committed to improving patient outcomes and experiences, regardless of where patients live or what they face. The Customer is innovating sustainably to provide healthcare for everyone, everywhere. 


    PROJECT
    The project focuses on building and maintaining large-scale cloud-based data infrastructure for healthcare applications. It involves designing efficient data pipelines, creating self-service tools, and implementing microservices to simplify complex processes. The work will directly impact how healthcare providers access, process, and analyze critical medical data, ultimately improving patient care.

     

    Responsibilities:

    • Collaborate with the Product Owner and team leads to define and design efficient pipelines and data schemas
    • Build and maintain infrastructure using Terraform for cloud platforms
    • Design and implement large-scale cloud data infrastructure, self-service tooling, and microservices
    • Work with large datasets to optimize performance and ensure seamless data integration
    • Develop and maintain squad-specific data architectures and pipelines following ETL and Data Lake principles
    • Discover, analyze, and organize disparate data sources into clean, understandable schemas

     

    Requirements:

    • Hands-on experience with cloud computing services in data and analytics
    • Experience with data modeling, reporting tools, data governance, and data warehousing
    • Proficiency in Python and PySpark for distributed data processing
    • Experience with Azure, Snowflake, and Databricks
    • Experience with Docker and Kubernetes
    • Knowledge of infrastructure as code (Terraform)
    • Advanced SQL skills and familiarity with big data databases such as Snowflake, Redshift, etc.
    • Experience with stream processing technologies such as Kafka and Spark Structured Streaming (see the sketch after this list)
    • At least an Upper-Intermediate level of English 
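
    For orientation only, below is a minimal PySpark Structured Streaming sketch of the kind of Kafka ingestion this requirement points at. It is a sketch under assumptions rather than project code: the broker address, topic name, event schema, and output paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available to Spark.

    # Hedged sketch: read JSON events from a Kafka topic and land them in a lake path.
    # Broker, topic, fields, and paths below are illustrative, not from the posting.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

    schema = StructType([
        StructField("record_id", StringType()),
        StructField("metric", StringType()),
        StructField("recorded_at", TimestampType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Checkpointing gives the file sink restart safety and exactly-once semantics.
    query = (
        events.writeStream.format("parquet")
        .option("path", "/lake/events")                    # hypothetical output path
        .option("checkpointLocation", "/lake/_chk/events")
        .start()
    )
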
  • 30 views · 0 applications · 1d

    Middle Data Engineer

    Full Remote · Ukraine · 2 years of experience · English - B2

    Our client is a Fortune 500 company. As a leading business-to-business organization, more than 3.2 million customers rely on its products in categories such as safety, material handling, and metalworking, along with services like inventory management and technical support.

    We are seeking a skilled Data Engineer with 3-5 years of experience to join our growing data team. In this role, you will be instrumental in building and maintaining robust, scalable data platforms that process large volumes of diverse data.

    You won't just be writing SQL queries; you will be embracing the full Modern Data Stack. You will use Python and Airflow to orchestrate complex workflows, dbt to manage sophisticated transformations within our Lakehouse environments (Snowflake/Databricks), and Terraform to ensure our infrastructure is scalable and reproducible as code.
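
    As a loose illustration of that stack (not this team's actual code), here is a minimal Airflow DAG that runs an ingestion script and then a dbt build. The DAG id, schedule, script path, and dbt project directory are all hypothetical.

    # Hedged sketch of the Python + Airflow + dbt pattern described above.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_lakehouse_refresh",   # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest_raw",
            bash_command="python /opt/pipelines/ingest.py",  # hypothetical script
        )
        transform = BashOperator(
            task_id="dbt_build",
            bash_command="dbt build --project-dir /opt/dbt/warehouse",  # hypothetical path
        )
        ingest >> transform  # transformations run only after ingestion succeeds

    The `ingest >> transform` dependency is the standard Airflow idiom for ordering tasks within a DAG, which is the "orchestrate complex workflows" part of the role.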

    This is an excellent opportunity for an engineer who wants to move beyond traditional ETL and work in a true DataOps environment, potentially bridging the gap between complex legacy sources (like SAP) and modern analytics.

    Key Responsibilities:

    Pipeline Development & Orchestration

    • Design, develop, and maintain reliable ETL/ELT pipelines using Python and SQL to ingest data from various sources into our data lake/warehouse.
    • Orchestrate complex data workflows and dependencies using Apache Airflow, ensuring timely data delivery and robust failure handling.

    Data Transformation & Modeling

    • Champion the use of dbt (data build tool) for developing, testing, and documenting data transformation logic within the warehouse.
    • Develop clean, highly optimized SQL models for reporting and analytics (data modeling concepts like Star Schema or Data Vault are a plus).

    Platform & Infrastructure Management

    • Work hands-on with both Snowflake and Databricks, optimizing compute resources, managing access controls, and ensuring high performance for end-users.
    • Utilize Terraform to provision and manage cloud infrastructure (e.g., S3 buckets, IAM roles, Snowflake warehouses) in an Infrastructure-as-Code paradigm.

    Data Quality & Reliability

    • Implement data quality checks and monitoring within pipelines to ensure the accuracy and integrity of our data.
    • Troubleshoot pipeline failures, identify performance bottlenecks, and implement long-term fixes.

    Requirements (Must-Haves):

    • Experience: 2+ years of professional experience in data engineering or backend software engineering with a data focus.
    • Programming: Strong proficiency in Python for data manipulation and scripting, and expert-level SQL skills for complex querying and performance tuning.
    • Modern Data Warehouse: Hands-on production experience with modern cloud data platforms, specifically Snowflake and Databricks. You should understand their architecture, compute models, and best practices.
    • Transformation: Proven experience using dbt in a production environment for transformation layers.
    • Orchestration: Experience building and managing complex DAGs in Apache Airflow.
    • Cloud platform experience: AWS
    • Infrastructure as Code: Working knowledge of Terraform for deploying and managing cloud resources.

    Preferred Qualifications (Nice-to-Haves):

    • SAP Exposure: Experience extracting data from SAP ECC or SAP S/4HANA systems. Understanding standard SAP tables and data structures is a significant plus.

    We offer*:

    • Flexible working format: remote, office-based, or a combination
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 32 views · 1 application · 1d

    Data Engineer

    Full Remote · EU · Product · 2 years of experience · English - B1

    PIN-UP Global is an international holding specializing in the development and implementation of advanced technologies, B2B solutions, and innovative products. We provide certification and licensing of our products, ensuring customers and partners of the holding receive high-quality and reliable solutions.

    Requirements:
    - 2+ years of experience as a Data Engineer;

    - Experience in designing and implementing data processing systems;

    - Good knowledge of Python basics (including SOLID design principles);

    - HTTP;

    - Git (GitLab);
     

    Will be a plus:

    - Columnstore DBs (ClickHouse);

    - Airflow;

    Responsibilities:
    - Communication with clients on the task requirements;

    - Write high-performance, testable, and maintainable code to implement new functionality;

    - Take part in technical discussions to come up with solutions for challenging issues;

    - Participate in code reviews to ensure code quality and distribute knowledge.


    Our benefits to you:
    🔸An exciting and challenging job in a fast-growing holding, the opportunity to be part of a multicultural team of top professionals in Development, Architecture, Management, Operations, Marketing, Legal, Finance and more
    🔸A great working atmosphere with passionate experts and leaders, sharing a friendly culture and a success-driven mindset, is guaranteed
    🔸Modern corporate equipment based on macOS or Windows and additional equipment are provided
    🔸Paid vacations, sick leave, personal events days, days off
    🔸Referral program — enjoy cooperation with your colleagues and get the bonus
    🔸Educational programs: regular internal training sessions, compensation for external education, attendance of specialized global conferences
    🔸Rewards program for mentoring and coaching colleagues
    🔸Free internal English courses
    🔸In-house Travel Service
    🔸Multiple internal activities: online platform for employees with quests, gamification, presents and news, PIN-UP clubs for movie / book / pets lovers and more
    🔸Other benefits could be added based on your location
     

  • 54 views · 4 applications · 1d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · English - B1

    Your expertise:

    • Strong analytical skills and a proven ability to extract actionable insights from data
    • Experience with at least one SQL-based database or data warehouse solution (e.g., MySQL, PostgreSQL, MSSQL, Snowflake, BigQuery, or Apache Iceberg-based systems)
    • Solid understanding of ETL processes and experience building them from scratch (experience with AWS Glue or Apache Airflow is a plus)
    • Proficiency in Python
    • Experience in gathering and analyzing system requirements
    • Excellent communication skills
    • Intermediate level of English (B1 or above)


    Will definitely be a plus:

    • Cloud experience, particularly with AWS
    • Familiarity with BI tools such as Tableau, Power BI, or similar

    What’s in it for you? 

    • Opportunity to deal with top-notch technologies and approaches in a world-leader product company with millions of customers
    • Opportunity to make a difference for online privacy, freedom of speech, and net neutrality
    • Decent market rate compensation depending on experience and skills
    • Developed corporate culture: no micromanagement, culture based on principles of truth, trust, and transparency
    • Support of personal and professional development
      • coverage of costs of external trainings, conferences, professional literature
      • support of experienced colleagues
      • in-house events and trainings
      • regular knowledge sharing in teams
      • English classes and speaking clubs
    • Life-balance support
      • 25 working days of vacation
      • 5 days of paid sick leave per year without providing a medical certificate (no limitation on sick leaves with medical confirmation)
      • generous maternity / paternity leave program
    • Professionally strong environment, friendly and open atmosphere, ability to influence the product development and recognition for it

    You will be involved in:

    • Creating sustainable analytics solutions for internal clients
    • Learning new technologies
    • Creating and implementing tracking documents to meet stated requirements for metadata management, operational data stores, and ETL environments
    • Collaborating with internal customers and IT partners, including system architects, software developers, database administrators, design analysts, and information modeling experts, to determine project requirements and capabilities and to strategize development and implementation timelines
    • Researching new technologies, data modeling methods, and information management systems to determine which ones should be incorporated into company data architectures, and developing implementation timelines and milestones

    About the company and project:

    Namecheap was founded in 2000 on the idea that all people deserve value-priced domains delivered through stellar service. Today, Namecheap is a leading ICANN-accredited domain name registrar and web hosting company with over 16 million customers and 19 million domains under management — and we’re just getting started.

    Our culture is built on the values that we live every day: the way we work, the way we collaborate with our global network of colleagues and the way we relentlessly innovate solutions that meet the emerging needs of our customers.

    We are the Business Intelligence team, solving business challenges with innovative technology solutions. We are experienced in setting up BI, ETL, DW, and ML solutions. Currently we are working on a new project, and we are looking for a Data Engineer who will take part in building a solution from scratch.

  • 43 views · 3 applications · 2d

    Data Engineers in the EU

    Full Remote · EU · 5 years of experience · English - B2

    Start: ASAP (by the 9th of March)
    Duration: long-term, by the end of 2026, with possible prolongation
    Language Requirement: Fluent in English (spoken and written)

    Requirements:
    We are seeking a Data Engineer – Professional 2 to contribute to the development and delivery of modern data products in close collaboration with domain teams and cross-functional stakeholders.
    This role is focused on hands-on implementation within a managed service setup. The consultant will support the building, governing, and operating of data products across the lifecycle, from ingestion to serving, while working under defined standards and guidance from senior team members.

    Engagement Model
    This is a managed service engagement.
    We are procuring a cohesive delivery package from one supplier, not individual consultants from multiple vendors.
    The supplier is accountable for service delivery, competence coverage, continuity, and performance at the service level.
    Suppliers responding to this request must also respond to the two associated requests. Partial submissions will not be considered.
    The assignment follows an output-driven delivery model.
    Delivery offered and accepted per Program Increment (10-week setup cycle).
    All consultants must be physically located within the EU due to regulatory data-sharing requirements.


    Scope of Work
    Build Data Products
     

    • Source-aligned data products with domain teams in cross-functional teams.
    • Analytical data products for BI, ML, and AI use cases in cross-functional teams.



    Deliverables:
    Source-aligned and analytical data products, including defined data contracts, documentation, SLAs, and runbooks.
    Work According to Standards and Reusable Patterns
     

    • Work according to and provide feedback on data contracts, modeling patterns, testing strategies, CI/CD pipelines, and golden paths.
    • Contribute to reusable templates and shared libraries.



    Deliverables:
    Deliveries aligned with defined best practices.
    Structured feedback and improvement proposals when standards or patterns are not functioning as intended.

    Enable Domain Teams

    • Work in cross-functional teams to bootstrap ownership, standards, and tooling.
    • Support transition of ownership to domain teams when feasible.
    • Perform pair-programming and structured collaboration.



    Deliverables:
    Pair-programming sessions and active cooperation with domain teams.
    Structured knowledge transfer to enable sustainable ownership.
    Professional Operations for Owned Products
     

    • Ensure production-grade lifecycle management from ingestion to serving.
    • Cover data quality, governance, lineage, SLA adherence, cost control, and observability.



    Deliverables:
    Monitoring of metadata, lineage, health, cost, and usage of delivered data products.

    Out of Scope
    The managed service team will not:

    • Operate the Data Mesh platform, IAM, or observability stack. Responsibility remains with IXA.
    • Own or operate operational systems of record. Responsibility remains with domain teams.
    • Build dashboards or develop ML models. Responsibility remains with KDBDA/B.



    Required Technical Competence

    • 1–2 years of experience in Data Engineering.
    • Minimum 1–2 previous assignments.
    • Mandatory proven delivery of pipelines and data products on:

      • AWS
      • Snowflake
      • dbt
      • Azure
      • Witboost
      • Dremio
     

    • Proficiency in dbt and modern ELT/ETL practices.
    • Experience with CI/CD and automated testing in data environments.
    • Familiarity with Data Mesh environments.
    • Familiarity with DataOps principles in production environments.
    • Basic understanding of data governance, data contracts, and modeling patterns.



    Personal Competencies

    • Strong willingness to learn and develop within a managed service context.
    • Ability to operate in an output-driven delivery model.
    • High technical credibility and structured problem-solving ability.
    • Strong collaboration skills in cross-functional and distributed teams.
    • Proactive, quality-oriented, and improvement-driven.



    Additional Information

    • As an approved exception for this specific managed service request, the standard CV template requirement does not apply. Suppliers may submit their own CV format for this assignment.
    • All consultants must be located within the EU due to regulatory data-sharing requirements.
    • All interviews will be conducted remotely.
  • 923 views · 74 applications · 2d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · English - B2

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Experience working with high-volume tables (10M+ rows).


    Optional skills (as a plus):
    • Experience with Spark (PySpark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of DS and Machine Learning algorithms.

     

    Key responsibilities:
    • Create ETL pipelines and data management solutions (API, integration logic);
    • Implement various data processing algorithms;
    • Take part in the creation of forecasting, recommendation, and classification models.

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;
    • Building interesting projects from scratch using new technologies;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.

  • 61 views · 6 applications · 2d

    Data Engineer

    Full Remote · Worldwide · Product · 3 years of experience · English - B2

    BigD is seeking a proactive and motivated Data Engineer to join our vibrant team. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data pipelines, building our own DWH, and integrating with APIs and other data sources. We are interested in a full-time position.
     

    We invite those who are fired up to:

    — Build our own DWH based on PostgreSQL
    — Work with large datasets that must be updated at hourly or daily frequency
    — DWH components: AWS (S3, Athena), PostgreSQL, BigQuery
    — Data collection: Kafka, Google Analytics, APIs, and other 3rd-party apps
    — Automate data quality/integrity testing
    — Develop and support ETL / ELT processes
    — Support saving ML models from other teams
    — Create and support project documentation
     

    Essential professional experience

    — 2+ years of professional experience in a Data Engineer position
    — Python development experience with a demonstrated ability to write clean, maintainable, and high-performance code
    — Advanced SQL query writing and optimization skills, with strong knowledge of PostgreSQL or similar relational databases
    — Hands-on experience with designing and implementing RESTful APIs (Aiohttp, Flask, FastAPI)
    — Hands-on experience with relational databases (PostgreSQL, Microsoft SQL Server, MySQL)
    — Proficiency in containerization technologies (e.g., Docker, Kubernetes)
    — Basic knowledge of technologies like dbt, Databricks, Snowflake, Kubernetes, and Airflow
    — Strong foundation in object-oriented design, data structures, algorithms, and computational complexity analysis

    We also appreciate

    — Self-motivation with a strong ownership mentality and the ability to deliver results independently
    — Excellent problem-solving skills and attention to detail
    — Strong communication skills for cross-functional collaboration

    Working conditions:

    • Direct communication with the core TEAM
    • 28 calendar days of vacation
    • Paid sick leave
    • Sports compensation
    • Compensation for courses and training
    • Day off for birthday
    • Flexible work schedule
    • Regular salary reviews
    • Salary paid at a favorable rate
    • Non-toxic work environment, free of bureaucracy
    • Stable salary payment

       

    Join a fast-growing team at the forefront of the iGaming industry, where your expertise will directly contribute to the company's growth and success.

  • 49 views · 6 applications · 2d

    Data Engineer

    Full Remote · Ukraine · 1 year of experience · English - B2

    N-iX is a global software development service company that helps businesses across the globe create next-generation software products. Founded in 2002, we unite 2,400+ tech-savvy professionals across 40+ countries, working on impactful projects for industry leaders and Fortune 500 companies. Our expertise spans cloud, data, AI/ML, embedded software, IoT, and more, driving digital transformation across finance, manufacturing, telecom, healthcare, and other industries. Join N-iX and become part of a team where your ideas make a real impact.

     

    This role is ideal for someone at the beginning of their data engineering career who wants to grow in a supportive environment. We value curiosity, a learning mindset, and the ability to ask good questions. If you’re motivated to develop your skills and become a strong Data Engineer over time, we’d be happy to help you grow with us 🚀



    Responsibilities

    • Support the implementation of business logic in the Data Warehouse under the guidance of senior engineers
    • Assist in translating business requirements into basic data models and transformations
    • Help develop, maintain, and monitor ETL pipelines using Azure Data Factory
    • Participate in data loading, validation, and basic query performance optimization
    • Work closely with senior team members and customer stakeholders to understand requirements and data flows
    • Contribute to documentation and follow best practices in data engineering and development
    • Gradually propose improvements and ideas as experience grows

       

    Requirements

    • Up to 1.5 years of experience in Data Engineering
    • Basic hands-on experience with SQL and strong willingness to work with it as a core skill
    • Familiarity with Microsoft Azure or strong motivation to learn Azure-based data solutions
    • Understanding of relational databases and fundamentals of data modeling
    • Ability to write clear and maintainable SQL queries
    • Basic experience with version control systems (e.g. Git)
    • Interest in data warehousing and analytical systems
    • Familiarity with Agile ways of working (through coursework, internships, or first commercial experience)
    • Strong analytical thinking and eagerness to learn from more experienced colleagues

       

    Nice to Have

    • Exposure to Azure Data Factory, dbt, or similar ETL tools
    • Basic knowledge of Databricks
    • Understanding of Supply Chain & Logistics concepts
    • Any experience working with SAP data (MM or related modules)
       
  • 37 views · 4 applications · 2d

    Data Engineer (Azure)

    Countries of Europe or Ukraine · 4 years of experience · English - B2

    Since 2001, FutureLog has been creating and developing digital procure-to-pay solutions for the hospitality and gastronomy industries.  We are a world-class procurement management platform serving over 10,000 clients across 20+ countries, delivering near real-time reporting with complex data models and advanced permission structures. 

     

    We are looking for a highly skilled Data Engineer (Azure) to design and build the next generation of our high-load data platform. 

     

    The primary goal of this role is to architect and implement a modern, scalable data pipeline based on Microsoft Fabric and Azure technologies (or other relevant solutions) to significantly enhance the performance, scalability, and speed of our current Power BI environment, which is built on MS SQL Managed Instance and SQL-based queries.

     

    This is a high-impact engineering role focused on platform modernization and long-term data architecture evolution. 

    Key Responsibilities 

    • Design and implement next-generation high-load data pipelines using Microsoft Fabric and Azure services 
    • Optimize and modernize the current MS SQL Managed Instance-based architecture 
    • Improve Power BI performance, scalability, and near real-time capabilities 
    • Design and implement scalable Lakehouse / Data Warehouse architectures 
    • Build and optimize ETL / ELT processes 
    • Ensure data quality, reliability, monitoring, and governance 
    • Design data models that support complex reporting and permission structures 
    • Collaborate closely with BI engineers, product managers, and infrastructure teams 
    • Contribute to architectural decisions and technical roadmap planning 
    • Support migration from legacy SQL-based architecture to modern Azure-based solutions 


      Must-Have Skills & Experience  
    • Strong hands-on experience with Azure data services including Microsoft Fabric, Azure Data Factory, and Event Hub 
    • Deep expertise in designing and optimizing high-load and distributed data pipelines, including Apache Spark processing 
    • Strong experience with MS SQL and PostgreSQL, including performance tuning and query optimization 
    • Experience designing modern Lakehouse, Data Warehouse, and hybrid data architectures using Fabric or similar platforms 
    • Hands-on experience with ETL / ELT development using orchestration and transformation tools such as dbt; experience with Apache Airflow is highly desirable 
    • Strong practical knowledge of CDC implementation using tools such as Debezium and integration with streaming or near real-time ingestion pipelines 
    • Solid understanding of Power BI architecture, including performance optimization, Gateway configuration, embedding scenarios, and Row-Level Security (custom and Active Directory-based) 
    • Experience working with large-scale datasets and complex access control and permission models 
    • Experience deploying and managing containerized data workloads using Docker and Kubernetes in cloud-native environments 
    • Strong analytical thinking, data modeling, and system design capabilities 
    • Fluent English (written and verbal communication) 

       

      Nice to Have 

    • Experience migrating legacy SQL environments to modern Azure-based platforms 
    • Experience with near real-time or streaming data processing 
    • Knowledge of CI/CD for data engineering workflows 
    • Experience working in high-scale SaaS platforms 

      What We Offer 
    • Opportunity to build and shape a next-generation Azure data platform 
    • High ownership and architectural impact 
    • Work on a global system serving thousands of enterprise customers 
    • Collaboration with strong BI, product, and engineering teams 
    • A dynamic and international working environment 
    • 20 working days of paid holidays.
    • Paid sick leaves. 
    • All necessary equipment for work. 
    • Flexible working hours. 
    • 300 EUR Wellness compensation annually (can be used for health insurance, education, gym, sports activities, etc.). 

       

  • 34 views · 4 applications · 2d

    Principal Data Engineer

    Full Remote · EU · 5 years of experience · English - B2

    !!! Location: remotely from Latvia or Lithuania; lower priority: Spain / Portugal / Romania / Slovakia / Hungary / Czech Republic !!!

    In partnership with one of the global consulting companies, we’re looking for a Principal Data Engineer (Product Data Domains). The client is a UK-based telecommunications company.

     

    Product Group is responsible for the design, development, and delivery of the company’s portfolio of digital products. Our portfolio is diverse and contains some of the largest and highest-profile properties on the UK internet. We’re a huge streaming media destination, a news source trusted across the world, a provider of educational and entertaining content to children of all ages, a sports results, analysis, and commentary service, and much more besides. It's an unparalleled portfolio of products, and our strength is our range and breadth. Working with the company’s content divisions, our focus now is on driving engagement across our portfolio so that our service becomes a valued daily habit for all audiences, just as television and radio have been over the last century.

    Data is fundamental to our future: both in helping us prioritise and shape our work and in creating richer, more personalised experiences for our audiences. And our portfolio means that we’ve got one of the widest, most diverse, and most exciting datasets to work with in the UK.

     

    The Role 

     

    The Principal Data Engineer will support the Product Data Domain teams. You will build ETL pipelines to ingest and transform data to develop the data products that will power key value use cases across the company. You will work in an agile multi-disciplinary team alongside product analytics developers, product data managers, data modelers and data operations managers, ensuring that all work delivers maximum value to the company. 

     

    Role and Responsibilities 

    The role and responsibilities will comprise:

     

    • Leads and architects the development of robust, scalable, and complex data pipelines to ingest, transform, and analyse large volumes of structured and unstructured data from diverse data sources. Pipelines must be optimised for performance, reliability, and scalability in line with the client's scale.
    • Leads initiatives to enhance data quality, governance, and security across the organisation, ensuring compliance with client guidelines and industry best practices.
    • Prioritises stakeholder requirements and identifies the best solution for timely delivery.
    • Leads on building automation workflows, including monitoring and alerting.
    • Encourages and mentors team members in partnership with other disciplines to create value with data across the wider organisation.
    • Helps set standards for coding, testing, and other engineering practices.
    • Leads on the building and testing of business continuity & disaster recovery procedures per requirements.
    • Proactively evaluates and provides feedback on future technologies and new releases/upgrades based on a deep understanding of the domain.

     

    Are you the right candidate? 

    When it comes to data engineering, we look for the following skills. 

     

    Technical skills 

    • Extensive (5+ years) experience in a data engineering or analytics engineering role, preferably in digital products. 
    • Extensive experience in building ETL pipelines, ingesting data from a diverse set of data sources (including event streams, various forms of batch processing) 
    • Excellent SQL and Python skills.
    • Extensive use of AWS  
    • Good working knowledge of Data Warehousing technologies (such as AWS Redshift, GCP BigQuery or Snowflake). 
    • Experience in deploying and scheduling code bases in a data development environment, using technologies such as Airflow.  
    • Demonstrable experience of working alongside cross-functional teams interacting with Product Managers, Infrastructure Engineers, Data Scientists, and Data Analysts. 

     

    Teamwork and stakeholder management   

    • Ability to listen to others’ ideas and build on them   
    • Ability to clearly communicate to both technical and non-technical audiences.  
    • Ability to collaborate effectively, working alongside other team members towards the team’s goals, and enabling others to succeed, where possible.  
    • Ability to prioritise. A structured approach and the ability to bring others on the journey.
    • Strong attention to detail 
  • 22 views · 2 applications · 2d

    System Engineer (Administrator)

    Full Remote · Ukraine · Product · 3 years of experience · English - B1

    Cosmolot is a Ukrainian product IT company that creates and develops an online digital entertainment platform. We build technology-driven solutions with a strong focus on security, transparency, and user comfort.

    Responsibilities:

    • Implement and configure the org domain with device enrollment
    • Own and scale part of the IT infrastructure and network architecture (WiFi, LAN/WAN, VPN, DNS)
    • Configure and maintain the IT infrastructure (system software, applications)
    • Collaborate with cross-functional teams to remediate identified vulnerabilities
    • Manage system permissions and user accounts
    • Participate in IT/Information Security projects aimed at improving user experience and implementing new technologies
    • Resolve dedicated L2/L3 user requests


    Requirements and Skills

    • Networking knowledge and hands-on experience managing corporate networks (protocols, VPN, DNS, WiFi, TCP/IP)
    • Experience with remote access systems
    • Strong Windows administration skills, with confident support of user and server environments
    • Experience in configuration and troubleshooting of macOS
    • Ability to script or use low-code/no-code tools to automate IT workflows and reduce manual operations
    • Documentation culture: you enjoy creating clean, structured, and useful docs
    • Ability to dive into details and pick up new knowledge
    • Intermediate level of English for documentation and interfaces
    • 3+ years of experience as a System Engineer / Administrator / in IT Operations (or similar roles)


    Would be a plus

    • Experience with AWS / Google cloud
    • Experience with log collection, parsing, and enrichment with Elastic, Logstash, OpenSearch
    • Experience with Google workspace and Atlassian (Jira, Confluence) products
    • Experience with SSO protocols and service-to-directory integrations
    • Security mindset and experience with MDM and IAM configuration (Google Workspace).


    What We Offer

    • Work in a strong team of professionals and genuinely supportive people
    • Opportunities for professional growth and development
    • Flexible working hours
    • Medical insurance
    • Sports expense compensation
    • Psychological support
    • Free language classes (English, Spanish, and German)
    • 18 working days of paid vacation, paid sick leave, and sick days
    • Regular team buildings and corporate events
    • All necessary equipment for comfortable work
       
  • 39 views · 6 applications · 3d

    Senior Data Engineer IRC289732

    Full Remote · Ukraine, Poland, Romania, Croatia, Slovakia · 4 years of experience · English - B2

    The client is a luxury skincare and beauty group. The client is based in San Francisco and sells luxury skincare products worldwide.

    Its main IT “product” is its e-commerce website, which functions as a digital platform to sell products, educate customers, and personalize experiences.

    • Runs on Salesforce Commerce Cloud (formerly Demandware) — an enterprise e-commerce platform that supports online shopping, order processing, customer accounts, and product catalogs.
    • Hosted on cloud infrastructure (e.g., AWS, Cloudflare) for reliable performance and security.
    • Uses HTTPS/SSL encryption to secure data transfers.
    • Integrated marketing and analytics technologies such as Klaviyo (email & SMS automation), Google Tag Manager, and personalization tools to track behavior, optimize campaigns, and increase conversions.

    It’s both a shopping platform and a digital touchpoint for customers worldwide.

     

    Requirements

    • 4+ years of experience as a Data Engineer, Analytics Engineer, or in a similar data-focused role.
    • Strong SQL skills for complex data transformations and analytics-ready datasets.
    • Hands-on experience with Python for data pipelines, automation, and data processing.
    • Experience working with cloud-based data platforms (AWS preferred).
    • Solid understanding of data warehousing concepts (fact/dimension modeling, star schemas).
    • Experience building and maintaining ETL/ELT pipelines from multiple data sources.
    • Familiarity with data quality, monitoring, and validation practices.
    • Experience handling customer, transactional, and behavioral data in a digital or e-commerce environment.
    • Ability to work with cross-functional stakeholders (Marketing, Product, Analytics, Engineering).

     

    Nice to have:

    • Experience with Snowflake, Redshift, or BigQuery.
    • Experience with dbt or similar data transformation frameworks.
    • Familiarity with Airflow or other orchestration tools.
    • Experience with marketing and CRM data (e.g. Klaviyo, GA4, attribution tools).
    • Exposure to A/B testing and experimentation data.
    • Understanding of privacy and compliance (GDPR, CCPA).
    • Experience in consumer, retail, or luxury brands.
    • Knowledge of event tracking and analytics instrumentation.
    • Ability to travel to the USA and a valid visa

     

    Job responsibilities

    • Design, build, and maintain scalable data pipelines ingesting data from multiple sources: the e-commerce platform (e.g., Salesforce Commerce Cloud), CRM/marketing tools (Klaviyo), web analytics, and fulfillment and logistics systems.
    • Ensure reliable, near-real-time data ingestion for customer behavior, orders, inventory, and marketing performance.
    • Develop and optimize ETL/ELT workflows using cloud-native tools.
    • Model and maintain customer, order, product, and session-level datasets to support analytics and personalization use cases.
    • Enable a 360° customer view by unifying data from website interactions, email/SMS campaigns, purchases, and returns.
    • Support data needs for personalization tools (e.g. product recommendation quizzes, ritual finders).
    • Build datasets that power marketing attribution, funnel analysis, cohort analysis, and LTV calculations.
    • Enable data access for growth, marketing, and CRM teams to optimize campaign targeting and personalization
    • Ensure accurate tracking and validation of events, conversions, and user journeys across channels.
    • Work closely with Product, E-commerce, Marketing, Operations, and Engineering teams to translate business needs into data solutions.
    • Support experimentation initiatives (A/B testing, new digital experiences, virtual stores).
    • Act as a data partner in decision-making for growth, CX, and operational efficiency.
    • Build and manage data solutions on cloud infrastructure (e.g. AWS).
    • Optimize storage and compute costs while maintaining performance and scalability.
  • 33 views · 3 applications · 3d

    Principal Data Engineer

    Full Remote · EU · 5 years of experience · English - B2

    The Role 

    The Principal Data Engineer will support the Product Data Domain teams. You will build ETL pipelines to ingest and transform data to develop the data products that will power key value use cases across the company. You will work in an agile multi-disciplinary team alongside product analytics developers, product data managers, data modelers and data operations managers, ensuring that all work delivers maximum value to the company. 

    Role and Responsibilities 

    • Leads and architects the development of robust, scalable, and complex data pipelines to ingest, transform, and analyse large volumes of structured and unstructured data from diverse data sources. Pipelines must be optimised for performance, reliability, and scalability in line with the client's scale.
    • Leads initiatives to enhance data quality, governance, and security across the organisation, ensuring compliance with client guidelines and industry best practices.
    • Prioritises stakeholder requirements and identifies the best solution for timely delivery.
    • Leads on building automation workflows, including monitoring and alerting.
    • Encourages and mentors team members in partnership with other disciplines to create value with data across the wider organisation.
    • Helps set standards for coding, testing, and other engineering practices.
    • Leads on the building and testing of business continuity & disaster recovery procedures per requirements.
    • Proactively evaluates and provides feedback on future technologies and new releases/upgrades based on a deep understanding of the domain.

       

      Are you the right candidate? 

      When it comes to data engineering, we look for the following skills. 

     

    Technical skills 

    • Extensive (5+ years) experience in a data engineering or analytics engineering role, preferably in digital products. 
    • Extensive experience in building ETL pipelines, ingesting data from a diverse set of data sources (including event streams, various forms of batch processing) 
    • Excellent SQL and Python skills.
    • Extensive use of AWS  
    • Good working knowledge of Data Warehousing technologies (such as AWS Redshift, GCP BigQuery or Snowflake). 
    • Experience in deploying and scheduling code bases in a data development environment, using technologies such as Airflow.  
    • Demonstrable experience of working alongside cross-functional teams interacting with Product Managers, Infrastructure Engineers, Data Scientists, and Data Analysts. 

     

     

  • 557 views · 53 applications · 3d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · English - B2

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:

    • 6+ months of experience as a Data Engineer;
    • Experience with SQL;
    • Experience with Python;

     

     

    Optional skills (as a plus):

    • Experience with ETL / ELT pipelines;
    • Experience with PySpark;
    • Experience with Airflow;
    • Experience with Databricks;

     

    Key Responsibilities:

    • Apply data processing algorithms;
    • Create ETL/ELT pipelines and data management solutions;
    • Work with SQL queries for data extraction and analysis;
    • Analyze data and apply data processing algorithms to solve business problems.

     

     

    We offer:

    • Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
    • Opportunity to work with a highly skilled engineering team on challenging projects;
    • Interesting projects with new technologies;
    • Great networking opportunities with international clients, challenging tasks;
    • Building interesting projects from scratch using new technologies;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.

  • 70 views · 2 applications · 3d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 4 years of experience · English - B2

    The Role

    We are looking for a Senior Data Engineer to join the Identity team at Equals5. This is not a standard ETL role. We are building a dynamic data ecosystem where AI is deeply integrated — both as a productivity multiplier and as a core component of our data processing logic for identity data enrichment and data scoring.

    You will own the infrastructure that handles over 10,000 executions per minute, ensuring stability, scalability, and data integrity. You will work with a modern stack on Google Cloud Platform, utilizing Cloud Functions and Kubernetes.

    We are looking for an engineer who improves infrastructure, automates everything, and is eager to implement LLM-based logic directly into high-load data flows.
     

    Responsibilities

    • AI-Driven Data Scoring: Design and implement pipelines that utilize LLMs to analyze and score identity data in real-time. You will integrate AI models directly into the decision-making loop, balancing accuracy with latency and cost (a minimal sketch follows this list).
    • Own the Data Architecture: Architect scalable data solutions using GCP and Python. You will manage data storage and retrieval using BigQuery and Apache Iceberg to support querying of TBs of data.
    • Heavy Data Processing: Utilize Apache Spark for data transformations and batch processing when lightweight cloud functions are not enough.
    • Manage High-Load Orchestration: Maintain and optimize our system instances. This involves complex dataflows, custom Python nodes, and performance tuning for 10k+ executions/minute.
    • Release Lifecycle (CI/CD): Take ownership of the deployment process, ensuring that updates to pipelines and infrastructure are released safely with proper testing and rollback strategies.
    • Database Optimization: Manage PostgreSQL performance under heavy load, optimizing complex queries and indexing strategies.
    • Active AI Usage: Use Claude Code and other AI engineering tools to accelerate your own development, refactoring, and testing processes.
    • Incident Resolution: Proactively monitor the system. When alerts fire, you investigate the root cause — whether it’s a database lock or an LLM hallucination — and fix it permanently.
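
    To make the AI-scoring bullet concrete, here is a minimal sketch of LLM-based record scoring using the OpenAI Python SDK. It is an assumption-laden illustration, not the team's implementation: the model choice, prompt, and record fields are all hypothetical.

    # Hedged sketch: ask an LLM for a 0-1 "genuine identity" score for one record.
    # Model, prompt, and fields are illustrative; in a 10k+/minute flow, calls
    # would typically be batched or cached and wrapped in fallbacks and retries.
    import json

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def score_identity(record: dict) -> float:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical pick; smaller models cut latency and cost
            messages=[
                {
                    "role": "system",
                    "content": 'Score how likely this identity record is genuine. '
                               'Reply only with JSON: {"score": <number from 0 to 1>}',
                },
                {"role": "user", "content": json.dumps(record)},
            ],
            response_format={"type": "json_object"},  # force parseable output
            timeout=5,  # bound tail latency in a high-throughput pipeline
        )
        return float(json.loads(resp.choices[0].message.content)["score"])

    # Example with a made-up record:
    print(score_identity({"email": "a@example.com", "name": "A. Person"}))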
       

    Requirements

    • 4-5+ years of experience in Data Engineering or Backend Engineering with a strong data focus.
    • Production AI Integration: Experience integrating LLMs (OpenAI, Anthropic, Gemini) into production applications via API. You understand latency, token limits, and how to structure data for AI scoring.
    • Expertise in GCP: Understanding of Google Cloud Platform (Cloud Functions, IAM, Networking).
    • Strong Python: You write clean, efficient, and testable code. You are comfortable building custom logic where standard tools fall short.
    • Big Data Stack: Experience with BigQuery, Apache Spark, and modern table formats like Apache Iceberg.
    • Kubernetes (K8s): Experience deploying and scaling services in containerized environments.
    • Workflow Automation: Understanding of workflow orchestration tools at a deep technical level. N8N is a big part of our domain, so familiarity with it is highly valuable.
    • PostgreSQL Mastery: Proven ability to handle heavy write/read loads and optimize schemas.
    • English: B2+ (Upper-Intermediate) or higher.
       

    Culture & Mindset

    • Self-improvement: You are a fast learner. You don’t fear AI replacing you; you master it to replace your manual tasks.
    • Ownership: You treat the Identity domain as your own business. If a scoring model drifts or a pipeline slows down, you notice it and fix it without being asked.
    • Internal Locus of Control: You take responsibility for outcomes. If an external API fails, you build a fallback mechanism instead of just blaming the provider.
    • Get It Done: You prioritize shipping value. You know when to use a simple script and when to build a complex architecture.
    • Openness: You share knowledge freely. If you find a better way to prompt the AI for scoring, you share it with the team.
       

    What We Offer

    • Fully remote with flexible hours (aligned with EU timezones for syncs).
    • AI-Native Environment: We provide licenses for Claude Code and encourage using the bleeding edge of AI tech for both daily coding and product features.
    • High-Impact Role: You will directly influence how we identify and score users, impacting the core business logic.
    • Cross-functional visibility: Work closely with Product and Tech Leads to shape the future of Identity.
    • No Bureaucracy: Fast decisions, no legacy processes, focus on results.