Data Engineer Jobs

  • 38 views · 3 applications · 12d

    Senior Data Engineer

    Full Remote · Poland, Romania, Ukraine · 6 years of experience · English - B2

    Transcenda is a global provider of design and engineering services. We put people first and strive to be agents of change by building a better future through technology. We are dedicated to empowering organizations to rapidly scale, digitally transform, and bring new products to market.

    Recognized by Newsweek as one of America's Greatest Workplaces of 2025, Transcenda is home to 200+ engineers, designers, analysts, and advisors solving complex business challenges through technology. By approaching our work through a variety of cultures and perspectives, we take calculated risks to design and develop innovative solutions that will have a positive impact tomorrow.

     

    Interesting Facts:

    • Over 200 team members
    • Fully remote: we let people work where they work best.
    • We work with clients who value our opinion and thought leadership, and where we can make meaningful contributions to architectural, engineering, and product decisions.
    • We have a strong social agenda: we promote diversity and inclusion and take part in a variety of charity initiatives throughout the year.
    • We have fun team-building activities.
    • Since we are growing rapidly, you can advance your career at a fairly quick pace.


    Must Haves:

    • Strong experience with Python, Java, or other programming languages
    • Advanced knowledge of SQL, including complex queries, query modularization, and optimization for performance and readability
    • Familiarity with the modern data stack and cloud-native data platforms, such as Snowflake, BigQuery, or Amazon Redshift
    • Hands-on experience with dbt (data build tool) for data modeling and transformations
    • Experience with data orchestration tools, such as Airflow or Dagster


    Nice to Have:

    • Experience with GitOps and continuous delivery for data pipelines
    • Experience with Infrastructure-as-Code tooling (Terraform)


    Key Responsibilities:

    • Design and build a data platform that standardizes data practices across multiple internal teams
    • Support the entire data lifecycle
    • Build and maintain integrations across data processing layers, including ingestion, orchestration, transformation, and consumption
    • Collaborate closely with cross-functional teams to understand data needs and ensure the platform delivers value
    • Document architectures, solutions, and integrations to promote best practices, maintainability, and usability
  • 33 views · 2 applications · 12d

    Sr Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    You'll take ownership of a large-scale AWS data platform powering analytics for thousands of hotels and restaurants worldwide. This is a hands-on role where your work directly impacts business decisions across the hospitality industry, not internal dashboards nobody uses.

    We're looking for someone who doesn't just build pipelines, but runs them, fixes them, and makes them bulletproof.

     

    About the Product

    A hospitality technology company operating a data analytics platform serving:

    • 2,500+ hotels
    • 500+ restaurants

    The system processes operational and performance data, delivering insights to product and analytics teams who rely on it daily.

     

    Your Mission

    Own and operate the AWS data infrastructure:

    • Build scalable, production-grade data pipelines
    • Ensure reliability, performance, and cost-efficiency
    • Keep everything running smoothly in real production environments

    This is not a "design slides and disappear" role; it's real ownership of real data systems.

     

    What You’ll Be Doing

    Data Engineering & Pipelines

    • Build and operate Spark / PySpark workloads on EMR and Glue
    • Design end-to-end pipelines:
      API / DB / file ingestion → transformation → delivery to analytics consumers
    • Implement data validation, monitoring, and quality checks
    • Optimize pipelines for performance, cost, and scalability

     

    Infrastructure & Operations

    • Manage AWS infrastructure using Terraform
    • Monitor via CloudWatch
    • Debug production failures and implement preventive solutions
    • Maintain IAM and security best practices

     

    Collaboration

    • Work closely with product and analytics teams
    • Define clear data contracts
    • Deliver reliable datasets for BI and analytics use cases

     

    Must-Have Experience

    • 5+ years of hands-on data engineering in production
      (actual pipelines running in production, not only architecture work)
    • Strong Spark / PySpark
    • Advanced Python
    • Advanced SQL
    • AWS data stack: EMR, Glue, S3, Redshift (or similar), IAM, CloudWatch
    • Infrastructure as Code with Terraform
    • Experience debugging and stabilizing production data systems

     

    Nice to Have

    • Kafka or Kinesis (streaming)
    • Airflow or similar orchestration tools
    • Experience supporting BI tools and analytics teams

     

    What We Care About

    • You've handled pipeline failures in production and learned from them
    • You prioritize data correctness, not just speed
    • You write maintainable, readable code
    • You understand AWS cost and scaling trade-offs
    • You avoid over-engineering and ship what delivers value
  • 72 views · 18 applications · 12d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · English - B2

    We are seeking a skilled Data Engineer to join our team and contribute to the development of large-scale analytics platforms. The ideal candidate will have strong experience in cloud ecosystems such as Azure and AWS, as well as expertise in AI and machine learning applications. Knowledge of the healthcare industry and life sciences is a plus.

    Key Responsibilities

    • Design, develop, and maintain scalable data pipelines for large-scale analytics platforms.
    • Implement cloud-based solutions using Azure and AWS, ensuring reliability and performance.
    • Work closely with data scientists and AI/ML teams to optimize data workflows.
    • Ensure data quality, governance, and security across platforms.
    • Collaborate with cross-functional teams to integrate data solutions into business processes.

    Required Qualifications

    • Bachelor's degree (or higher) in Computer Science, Engineering, or a related field.
    • 3+ years of experience in data engineering, big data processing, and cloud-based architecture.
    • Strong proficiency in cloud services (Azure, AWS) and distributed computing frameworks.
    • Mandatory hands-on experience with Databricks (Unity Catalog, Delta Live Tables, Delta Sharing, etc.).
    • Expertise in SQL and database management systems (SQL Server, MySQL, etc.).
    • Experience with data modeling, ETL processes, and data warehousing solutions.
    • Knowledge of AI and machine learning concepts and their data requirements.
    • Proficiency in Python, Scala, or similar programming languages.
    • Basic knowledge of C# and/or Java programming.
    • Familiarity with DevOps, CI/CD pipelines.
    • High-level proficiency in English (written and spoken).

    Preferred Qualifications

    • Experience in the healthcare or life sciences industry.
    • Understanding of regulatory compliance related to healthcare data (HIPAA, GDPR, etc.).
    • Familiarity with interoperability standards such as HL7, FHIR, and EDI.
  • 80 views · 1 application · 15d

    Data Engineer

    Countries of Europe or Ukraine · Product · 5 years of experience · English - B2

     

    🔥 We're looking for a highly skilled Data Expert! 🔥

     

    Product | Remote

     

    We're looking for a data expert who bridges technical depth with curiosity. You'll help Redocly turn data into insight, driving smarter product, growth, and business decisions.

     

    This role combines data governance and development. You'll build reliable data pipelines, improve observability, and uncover meaningful patterns that guide how we grow and evolve.

     

    You'll work closely with product and technical teams to support data collection, processing, and consistency across systems.

     

    What you'll do

    • Analyze product and user behavior to uncover trends, bottlenecks, and opportunities.
    • Build and maintain data pipelines and ETL processes.
    • Design and optimize data models for new features and analytics (e.g., using dbt).
    • Work with event-driven architectures and standards like AsyncAPI and CloudEvents.
    • Collaborate with engineers to improve data quality, consistency, and governance across systems.
    • Use observability and tracing tools (e.g., OpenTelemetry) to monitor and improve performance.
    • Support existing frontend and backend systems related to analytics and data processing.
    • Build and maintain datasets for analytics and reporting.

     

    You're a great fit if you have

    • 5+ years of software engineering experience, with 3+ years focused on data engineering.
    • Strong SQL skills and experience with data modeling (dbt preferred).
    • Strong proficiency with Node.js, React, JavaScript, and TypeScript.
    • Proven experience in data governance and backend systems.
    • Familiarity with columnar databases or analytics engines (ClickHouse, Postgres, etc.).
    • Strong analytical mindset, attention to detail, and clear communication.
    • Passion for clarity, simplicity, and quality in both data and code.
    • English proficiency: Upper-Intermediate or higher.

     

    How you'll know you're doing a great job

    • Data pipelines are trusted, observable, and performant.
    • Metrics and dashboards are used across teams, not just built once.
    • Teams make better product decisions, faster, because of your insights.
    • You're the go-to person for clarity when questions arise about "what the data says."

     

    About Redocly

    Redocly builds tools that accelerate API ubiquity. Our platform helps teams create world-class developer experiences, from API documentation and catalogs to internal developer hubs and public showcases. We're a globally distributed team that values clarity, autonomy, and craftsmanship. You'll work alongside people who love developer experience, storytelling, and building tools that make technical work simpler and more joyful.

    Headquartered in Austin, Texas, US, with an office in Lviv, Ukraine.

     

    Redocly is trusted by leading tech, fintech, telecom, and enterprise teams to power API documentation and developer portals. Redocly's clients range from startups to Fortune 500 enterprises.

    https://redocly.com/

     

    Working with Redocly

    • Team: 4-6 people (middle to senior)
    • Team's location: Ukraine & Europe
    • There are functional, product, and platform teams; each has its own ownership and line structure, and teams decide for themselves when to hold weekly meetings.
    • Cross-functional teams are formed for each two-month cycle, giving team members the opportunity to work across all parts of the product.
    • Methodology: Shape Up

     

    Perks

    • Competitive salary based on your expertise
    • Fully remote, though you're welcome to come to the office occasionally if you wish.
    • Cooperation on a B2B basis with a US-based company (for EU citizens) or under a gig contract (for Ukraine).
    • After a year with the company, you can buy a certain number of the company's shares.
    • Around 30 days of vacation (unlimited, but let's keep it reasonable)
    • 10 working days of sick leave per year
    • Public holidays according to local standards
    • No trackers or screen recorders
    • Working hours: EU/UA time zone, 8-hour working day; most people start at 10-11 am.
    • Equipment provided: MacBooks (M1-M4)
    • Regular performance reviews

     

    Hiring Stages

    • Prescreening (30-45 min)
    • HR Call (45 min)
    • Initial Interview (30 min)
    • Trial Day (paid)
    • Offer

     

    If you are an experienced Data Engineer who wants to work on impactful data-driven projects, we'd love to hear from you!


    Apply now to join our team!

  • 59 views · 6 applications · 15d

    Lead Data Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    We are seeking a highly skilled Lead Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

     

    Does this sound like you?

    • 7+ years of experience in the data engineering field
    • 1+ year of experience as a Lead or Architect
    • Strong expertise in SQL and data modeling concepts.
    • Hands-on experience with Airflow.
    • Experience working with Redshift.
    • Proficiency in Python for data processing.
    • Strong understanding of data governance, security, and compliance.
    • Experience in implementing CI/CD pipelines for data workflows.
    • Ability to work independently and collaboratively in an agile environment.
    • Excellent problem-solving and analytical skills.

       

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions.
    • Build and optimize ETL/ELT pipelines for efficient data integration.
    • Design and implement data models to support analytical and reporting needs.
    • Ensure data integrity, quality, and security across all pipelines.
    • Optimize data performance and scalability using best practices.
    • Work with big data technologies such as Redshift.
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
    • Implement CI/CD pipelines for data workflows.
    • Monitor, troubleshoot, and improve data processes and system performance.
    • Stay updated with industry trends and emerging technologies in data engineering.

       

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance and remuneration reviews
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program with attractive bonuses
    • External & internal training and IT certifications
  • 26 views · 4 applications · 15d

    Senior Data Engineer

    Full Remote · EU · 3 years of experience · English - B2

    We are looking for an experienced Data Engineer to join a long-term B2C project. The main focus is on building Zero ETL pipelines, as well as maintaining and improving existing ones.

    Responsibilities:
    - Build and maintain scalable Zero ETL pipelines.
    - Design and optimize data warehouses and data lakes on AWS (Glue, Firehose, Lambda, SageMaker).
    - Work with structured and unstructured data, ensuring quality and accuracy.
    - Optimize query performance and data processing workflows (Spark, SQL, Python).
    - Collaborate with engineers, analysts, and business stakeholders to deliver data-driven solutions.

    Requirements:
    - 5+ years of experience in Data Engineering.
    - Advanced proficiency in Spark, Python, SQL.
    - Expertise with AWS Glue, Firehose, Lambda, SageMaker.
    - Experience with ETL tools (dbt, Airflow, etc.).
    - Background in B2C companies is preferred.
    - JavaScript and Data Science knowledge are a plus.
    - Degree in Computer Science (preferred, not mandatory).

    We offer:
    - Full-time remote job, B2B contract
    - 12 sick days and 18 paid vacation business days per year
    - Comfortable work conditions (including a MacBook Pro and Dell monitor at each workplace)
    - Smart environment
    - Interesting projects from renowned clients
    - Flexible work schedule
    - Competitive salary according to qualifications
    - Guaranteed full workload during the term of the contract
     

  • 42 views · 4 applications · 15d

    Senior Data Engineer (Data Competency Center)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Are you a Senior Data Engineer passionate about building scalable, secure, and high-performance data solutions? Join our Data Engineering Center of Excellence at Sigma Software and work on diverse projects that challenge your skills and inspire innovation.

     

    At Sigma Software, we value expertise, continuous learning, and a supportive environment where your career path is shaped around your strengths. You'll be part of a collaborative team, gain exposure to cutting-edge technologies, and work in an inclusive culture that fosters growth and innovation.

    Project

    Our Data Engineering Center of Excellence (CoE) is a specialized unit focused on designing, building, and optimizing data platforms, pipelines, and architectures. We work across diverse industries, leveraging modern data stacks to deliver scalable, secure, and cost-efficient solutions.

    Job Description

    • Collaborate with clients and internal teams to clarify technical requirements and expectations
    • Implement architectures using Azure or AWS cloud platforms
    • Design, develop, optimize, and maintain squad-specific data architectures and pipelines
    • Discover, analyze, and organize disparate data sources into clean, understandable data models
    • Evaluate new tools for analytical data engineering or data science
    • Suggest and contribute to training and improvement plans for analytical data engineering skills, standards, and processes

    Qualifications

    • 5+ years of experience with Python and SQL
    • Hands-on experience with AWS services (API Gateway, Kinesis, Athena, RDS, Aurora)
    • Proven experience building ETL pipelines for analytics/internal operations
    • Experience developing and integrating APIs
    • Solid understanding of Linux OS
    • Familiarity with distributed applications and DevOps tools
    • Strong troubleshooting/debugging skills
    • English level: Upper-Intermediate
    Will be a plus:

    • 2+ years with Hadoop, Spark, or Airflow
    • Experience with DAGs/orchestration tools
    • Experience with Snowflake-based data warehouses
    • Experience developing event-driven data pipelines

    Personal profile:

    • Passion for data processing and continuous learning
    • Strong problem-solving skills and analytical thinking
    • Ability to mentor and guide team members
    • Effective communication and collaboration skills
  • 46 views · 1 application · 16d

    Data Engineer (Relocation to Spain)

    Office Work · Spain · Product · 3 years of experience · English - None

    Do you know that your professional skills can ensure the liquidity of a cryptocurrency exchange?
    We are looking for a Data Engineer with ETL/ELT experience for the Spanish office of the most famous Ukrainian company.

    Working with big data, a strong team, assistance with family relocation, and top conditions.

     

    Main Responsibilities

    • Design, build, and maintain scalable and resilient data pipelines (batch and real-time)
    • Develop and support data lake/data warehouse architectures
    • Integrate internal and external data sources/APIs into unified data systems
    • Ensure data quality, observability, and monitoring of pipelines
    • Collaborate with backend and DevOps engineers on infrastructure and deployment
    • Optimize query performance and data processing latency across systems
    • Maintain documentation and contribute to internal data engineering standards
    • Implement data access layers and provide well-structured data for downstream teams

     

    Mandatory Requirements

    • 3+ years of experience as a Data Engineer in high-load or data-driven environments
    • Proficient in Python for data processing and automation (pandas, pyarrow, sqlalchemy, etc.)
    • Advanced knowledge of SQL: query optimization, indexes, partitions, materialized views
    • Hands-on experience with ETL/ELT orchestration tools (e.g., Airflow, Prefect)
    • Experience with streaming technologies (e.g., Kafka, Flink, Spark Streaming)
    • Solid background in data warehouse solutions: ClickHouse, BigQuery, Redshift, or Snowflake
    • Familiarity with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code principles
    • Experience with containerization and deployment tools (e.g., Docker, Kubernetes, CI/CD)
    • Understanding of data modeling, data versioning, and schema evolution (e.g., dbt, Avro, Parquet)
    • English: at least intermediate (for documentation and communication with tech teams)

     

    We offer

    Immerse yourself in Crypto & Web3:
    • Master cutting-edge technologies and become an expert in the most innovative industry.
    Work with the Fintech of the Future:
    • Develop your skills in digital finance and shape the global market.
    Take Your Professionalism to the Next Level:
    • Gain unique experience and be part of global transformations.
    Drive Innovations:
    • Influence the industry and contribute to groundbreaking solutions.
    Join a Strong Team:
    • Collaborate with top experts worldwide and grow alongside the best.
    Work-Life Balance & Well-being:
    • Modern equipment.
    • Comfortable working conditions and an inspiring environment to help you thrive.
    • 30 calendar days of paid leave.
    • Additional days off for national holidays.

     

    With us, you'll dive into the world of unique blockchain technologies, reshape the crypto landscape, and become an innovator in your field. If you're ready to take on challenges and join our dynamic team, apply now and start a new chapter in your career!
     

  • 40 views · 1 application · 16d

    Data Engineer (with Azure)

    Full Remote · EU · 3 years of experience · English - B1

    Main Responsibilities:

    The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure required for a data processing pipeline that supports customer requirements.

     

    You will work with cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, on implementation projects for corporate clients across the EU, CIS, the United Kingdom, and the Middle East.

    Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.

     

    Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and model optimization.

     

    Mandatory Requirements:

    • 3+ years of experience, ideally in a Data Engineer role
    • Understanding of data modeling, data warehousing concepts, and ETL processes
    • 2+ years of experience with Azure Cloud technologies
    • Experience with distributed computing principles and familiarity with key architectures; broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)
    • Understanding of landing and staging areas, data cleansing, data profiling, data security, and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)
    • SQL skills
    • Communication and interpersonal skills
    • English: B2

     

    It will be beneficial if a candidate has experience in SQL migration from on-premises to cloud, data modernization and migration, advanced analytics projects, and/or professional certification in data & analytics.

     

    We offer:

    • Professional growth and international certification
    • Free-of-charge technical and business trainings and the best bootcamps (worldwide, including courses at Microsoft HQ in Redmond)
    • Innovative data & analytics projects and practical experience with cutting-edge Azure data & analytics technologies on various customers' projects
    • Great compensation and individual bonus remuneration
    • Medical insurance
    • Long-term employment
    • Individual development plan

  • 34 views · 0 applications · 16d

    Senior Data Engineer

    Ukraine · 4 years of experience · English - B2

    We are a global audience and location intelligence company that helps marketers connect the digital and physical world. We provide data-driven solutions to enhance marketing campaigns by leveraging location and audience data to reveal consumer behavior and enable more precise targeting and measurement. We work on high-end / high-performance / high-throughput systems for timely analysis of data for autonomous driving and other big data applications, e.g., for e-commerce.


    Job Description

    You have 4+ years of experience in a similar position.

    You have significant experience with Python. Familiarity with Java or Scala is a plus.

    Hands-on experience building scalable solutions in AWS.

    Proficiency in NoSQL and SQL databases and in high-throughput data-related architecture and technologies (e.g., Kafka, Spark, Hadoop, MongoDB, AWS Batch, AWS Glue, Athena, Airflow, dbt).

    Excellent SQL and data transformation skills.

    Excellent written and verbal communication skills with an ability to simplify complex technical information.

    Experience guiding and mentoring junior team members in a collaborative environment.


     

    Job Responsibilities

    Work in a self-organised agile team with a high level of autonomy, and you will actively shape your team's culture.

    Design, build, and standardise privacy-first big data architectures, large-scale data pipelines, and advanced analytics solutions in AWS.

    Develop complex integrations with third-party partners, transferring terabytes of data.

    Align with other Data experts on data (analytics) engineering best practices and standards, and introduce those standards and data engineering expertise to the team in order to enhance existing data pipelines and build new ones.

    Successfully partner up with the Product team to constantly develop further and improve our platform features.

  • 46 views · 7 applications · 16d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B1

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.



    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
  • 44 views · 3 applications · 16d

    Data Engineer (with Snowflake and insurance companies experience)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Senior Data Engineer with experience in insurance & Snowflake.

    On behalf of our Client from the USA, Mobilunity is looking for a Senior Data Engineer.

    Our client is a well-established US-based financial services organization with a long history in the insurance and long-term financial security space. The company operates as a member-oriented, non-profit institution, focusing on life insurance, retirement programs, and community-driven initiatives. The product ecosystem is mature, data-heavy, and highly regulated, with a strong emphasis on reliability, accuracy, and compliance.

    We are looking for a Senior Data Engineer with deep Snowflake expertise and hands-on experience in insurance companies. This domain background is a must-have requirement.
     

    Requirements:
    - 5+ years of experience as a Data Engineer.
    - Strong hands-on experience with Snowflake (data modeling, optimization, ELT/ETL pipelines).
    - Mandatory experience in the insurance domain (policies, claims, underwriting, actuarial, or related datasets).
    - Solid experience with data warehousing and analytical platforms.
    - Proven ability to build and maintain scalable, reliable data pipelines.
    - Advanced SQL skills.
    - Experience working with cloud platforms (AWS / GCP / Azure).
    - Upper-Intermediate+ / Advanced English for direct communication with US stakeholders.
     

    Nice to have:
    - Experience with dbt, Airflow, or similar orchestration tools.
    - Background in regulated or compliance-heavy environments.
    - Previous experience working with US clients or distributed teams.
     

    In return we offer:
    - The friendliest community of like-minded IT people.
    - Open knowledge-sharing environment: exclusive access to a rich pool of colleagues willing to share their endless insights into the broadest variety of modern technologies.


    Languages
    English - B2-C1

  • 7 views · 0 applications · 16d

    Infrastructure Engineer with Java (hybrid work in Warsaw)

    Office Work · Poland · 5 years of experience · English - B2

    The product we are working on is one of the TOP-3 navigation systems, along with complex web services and other solutions related to it. The web and mobile apps handle information at a massive scale and extend well beyond search, giving people and companies many new, useful options.

    This role focuses on executing critical migration projects within the backend infrastructure of the project. The Backend Infrastructure team is undertaking several large-scale migrations to modernize its systems, improve reliability, and reduce maintenance overhead. This TVC position will be instrumental in performing the hands-on work required for these migrations, working closely with the infrastructure team and other Backend teams.
     

    Responsibilities:
     

    • Execute Migrations: Actively participate in and drive the execution of large-scale code and system migrations across various backend services. Some examples include:
      • Migrating event processing systems from custom infrastructure to managed infrastructure solutions;
      • Transitioning services from custom OpenCensus metrics collection to OpenTelemetry;
      • Migrating custom metrics to standard OpenTelemetry metrics.
    • Code Modification and Updates: Update and refactor existing codebases (primarily Java) to align with new libraries, platforms, and infrastructure.
    • Testing: Work with the Infrastructure team to create a testing plan for migrations to ensure that changes do not break running services and execute the test plans.
    • Collaboration: Work closely with the Backend Infrastructure team and other software engineers to understand migration requirements, plan execution strategies, and ensure smooth transitions with minimal disruption.
    • Problem Solving: Investigate, debug, and resolve technical issues and complexities encountered during the migration processes.
    • Documentation: Maintain clear and concise documentation for migration plans, processes, changes made, and outcomes.
    • Best Practices: Adhere to software development best practices, ensuring code quality, and follow established guidelines for infrastructure changes.

       

    Requirements:

    • 5+ years of hands-on experience in backend software development.
    • Strong proficiency in Java programming.
    • Strong communication and interpersonal skills, with the ability to collaborate effectively within a technical team environment.
    • Bachelor's degree in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.
    • Good spoken and written English level β€” Upper-Intermediate or higher.
       

    Nice to have:

    • Experience with observability frameworks such as OpenTelemetry or OpenCensus.
    • Familiarity with gRPC.
    • Knowledge of Google Cloud Platform (GCP) services, particularly data processing services like Dataflow.
       

    We offer:

    • Opportunities to develop in various areas;
    • Compensation package (20 paid vacation days, paid sick leaves);
    • Flexible working hours;
    • Medical insurance;
    • English courses with a native speaker, yoga (Zoom);
    • Paid tech training and other activities for professional growth;
    • Hybrid work mode (~3 days in the office);
    • International business trips;
    • Comfortable office.

       

    If your qualifications and experience match the requirements of the position, our recruitment team will reach out to you within a week at most. Please rest assured that we carefully consider each candidate, but due to the volume of applications, the review and further processing of your candidacy may take some time.

  • 64 views · 6 applications · 17d

    Senior Data Engineer (for Ukrainians in EU)

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B2

    About our Customer
    It's a European company turning bold ideas into reality. We build innovative products for startups and guide established companies on their journey to data-driven innovation and AI-powered solutions. Our expertise spans EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more.
     

    Founded in Ukraine with a Scandinavian-inspired culture.
     

    We value skills, passion, excellence, equality, openness, mutual respect, and trust. You'll join a growing company, work with creative, inspiring colleagues, explore cutting-edge technologies, and build AI-driven solutions that make a real impact.
     

    Project
    Our client is an Icelandic energy company providing electricity, geothermal water, cold water, carbon storage, and optic networks.
     

    We are looking for a Senior Data Engineer ready to dive deep into data, solve challenging problems, and create maximum value for internal stakeholders. You'll handle complex issues, design long-term improvements, and develop new data pipelines as part of an enthusiastic and collaborative Data Engineering team.
     

    Tech Stack:
    πŸ—„οΈ MS SQL Server | Azure/Databricks | Power BI, Tableau | Microsoft BI stack (SSRS, SSIS, SSAS) | TimeXtender | exMon
     

    Responsibilities:

    • Develop & maintain enterprise data warehouse, data marts, staging layers, and transformation logic
    • Design, implement & optimize ETL/ELT pipelines (SQL Server, Azure, Databricks)
    • Build & maintain robust data models (dimensional/star-schema, semantic layers, analytical datasets)
    • Improve BI environment and ensure data is reliable and actionable
    • Implement controlled data delivery processes to analysts & BI specialists
    • Support data quality frameworks, testing & validation procedures
    • Investigate 3rd-line operational issues & guide 2nd-line support
    • Run stakeholder workshops to translate business needs into elegant technical solutions
    • Identify opportunities to improve data usability, value, and automation
    • Document all processes, models, and pipelines in Confluence
    • Collaborate with on-site Team Lead for sprint planning, backlog refinement, and prioritization
       

    Requirements

    • Bachelor's or Master's in Computer Science or related field
    • 6+ years of experience with DWH solutions & data pipelines
    • Strong SQL development skills (MS SQL Server preferred)
    • ETL/ELT workflow experience using:
      • Databricks
      • Azure Data Factory / cloud orchestration tools
      • Azure data platform services (storage, compute, data lake)
    • Solid understanding of data warehouse architectures & dimensional modeling
    • Experience with data quality checks, validation, and monitoring
    • Understanding of BI concepts & ability to prepare user-friendly datasets
    • Strong communication, able to explain data concepts to stakeholders
    • Willingness to document solutions and share knowledge
    • Experience in distributed, cross-cultural Agile environments
    • English: upper-intermediate / advanced


    🔹 Bonus / Nice to Have

    • Python or similar for data processing
    • Performance tuning for SQL or data pipelines
    • Interest in visual clarity & usability of data models
  • 43 views · 1 application · 17d

    Senior Data Engineer

    Full Remote · EU · 6 years of experience · English - B2

    OUR COMPANY  

    HBM is a European company building exciting new products from scratch for startups and helping mature companies on their journey towards data-driven innovation and AI-based solutions. Our expertise spans EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more.

    Founded in Ukraine and built on Scandinavian culture, HBM is hiring both in Ukraine and the EU for our customers located in Europe and the USA.

      

    Our values include skills, passion, excellence, equality, openness, mutual respect, and trust. 

      

    At HBM, you can become part of a growing company, work with creative colleagues, and enjoy modern technologies while creating AI-based solutions. You'll be part of a strong corporate culture combined with the agility and flexibility of a start-up, backed by proven outsourcing and development practices, a human-oriented leadership team, an entrepreneurial mindset, and a healthy approach to work-life balance.

      

    PROJECT 

    Our customer is an Icelandic energy company providing electricity, geothermal water, cold water, carbon storage, and optic networks.

    We are looking for a Senior Data Engineer who will be responsible for developing, enhancing, and maintaining the enterprise data warehouse, data platform, and analytical data flows. The role supports all of the company's subsidiaries and contributes to creating maximum value from data for internal stakeholders.

    The qualified candidate will work as part of the Data Engineering team and will handle complex 3rd-line issues, long-term improvements, and new data development. The work will be aligned with the team's structured 3-week planning cycles, and tight collaboration with the on-site Team Lead is expected.

    Tech stack: MS SQL Server, Azure/Databricks, Power BI, Tableau, Microsoft BI stack (SSRS, SSIS, SSAS [OLAP and Tabular]), TimeXtender, exMon.

     

    WE PROVIDE YOU WITH THE FOLLOWING EXCITING CHALLENGES 

    • Develop and maintain the enterprise data warehouse, data marts, staging layers, and transformation logic 
    • Design, implement, and optimize ETL/ELT pipelines (SQL Server, Azure data components, Databricks, etc.) 
    • Build and maintain robust data models (dimensional/star-schema, semantic layers, analytical datasets) 
    • Develop and improve the BI environment and the underlying data processes used by analysts across the company 
    • Implement processes for controlled, reliable data delivery to BI specialists, analysts, and modelling teams (e.g., forecasting, scenario modelling) 
    • Support data quality frameworks and implement testing/validation procedures 
    • Investigate and resolve escalated 3rd-line operational issues and guide 2nd-line support in root cause analysis 
    • Conduct stakeholder workshops to understand business requirements and translate them into technical data solutions 
    • Identify opportunities to improve data usability, analytical value, and process automation 
    • Document data processes, models, pipelines, and architectural decisions in Confluence 
    • Collaborate with the on-site Team Lead during sprint planning, backlog refinement, and prioritization. 

     

      

    WE EXPECT FROM YOU 

    • Degree (Bachelor's or Master's) in computer science or a comparable course of study
    • 6+ years of experience working with DWH solutions and data pipelines 
    • Strong SQL development skills, preferably in MS SQL Server 
    • Experience building and maintaining ETL/ELT workflows using:
      • Databricks
      • Azure Data Factory or similar cloud-based data orchestration tools
      • Azure data platform services (e.g., storage, compute, data lake formats)
    • Solid understanding of data warehouse architectures and dimensional modelling 
    • Experience with data quality checks, validation frameworks, and monitoring 
    • Understanding of BI concepts and ability to prepare user-friendly analytical datasets 
    • Experience collaborating with business stakeholders and capturing analytical or operational data requirements 
    • Strong communication skills and the ability to explain data concepts clearly 
    • Willingness to document solutions and share knowledge within the team 
    • Excellent communication skills and the ability to communicate with stakeholders at multiple levels
    • Action- and quality-oriented
    • Experience working in a distributed, cross-cultural Agile environment
    • English: upper-intermediate / advanced 

     

    WOULD BE A PLUS 

    • Experience with Python or similar languages for data processing 
    • Experience with performance tuning for SQL or data pipelines 
    • Interest in visual clarity, usability of data models, and BI-driven design 

     

     

     WE OFFER YOU 

      

    • Modern technologies, new products development, different business domains. 
    • Start-up agility combined with mature delivery practices and management team. 
    • Strong focus on your technical and personal growth. 
    • Transparent career development and individual development plan. 
    • Flexible working mode (remote/work from office), full remote possibility. 
    • Competitive compensation and social package.
    • Focus on well-being and a human touch.
    • Flat organization where everyone is heard and is invited to contribute. 
    • Work-life balance approach to work. 
    • Passion and Fun in everything we do. 