Jobs Data Engineer

  • 867 views · 73 applications · 3d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · English - B1

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Experience working with high-volume tables (10M+ rows).


    Optional skills (as a plus):
    • Experience with Spark (PySpark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of DS and Machine Learning algorithms.

     

    Key responsibilities:
    • Create ETL pipelines and data management solutions (API, integration logic);
    • Implement various data processing algorithms;
    • Involvement in the creation of forecasting, recommendation, and classification models.

     

    We offer:
    • Great networking opportunities with international clients, challenging tasks;
    • Building interesting projects from scratch using new technologies;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.

  • 84 views · 13 applications · 3d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Dataforest is looking for a Senior Data Engineer to join our team and work on the Dropship project — a cutting-edge data intelligence platform for e-commerce analytics.
    You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, processing, analysis, and integrations.

    If you are passionate about data optimization, system performance, and architecture, we're waiting for your CV!

    Requirements:
    • 4+ years of commercial experience with Python;
    • Advanced experience with SQL DBs (optimisations, monitoring, etc.);
    • PostgreSQL — must have;
    • Solid understanding of ETL principles (architecture, monitoring, alerting, finding and resolving bottlenecks);
    • Experience with message brokers: Kafka / Redis;
    • Experience with Pandas;
    • Familiar with AWS infrastructure (boto3, S3 buckets, etc.);
    • Experience working with large volumes of data;
    • Understanding of the principles of medallion architecture.

    Will Be a Plus:
    • Understanding of NoSQL DBs (Elasticsearch);
    • TimescaleDB;
    • PySpark;
    • Experience with e-commerce or fintech.
     

    Key Responsibilities:
    • Develop and maintain a robust and scalable data processing architecture using Python.
    • Design, optimize, and monitor data pipelines using Kafka and AWS SQS.
    • Implement and optimize ETL processes for various data sources.
    • Manage and optimize SQL and NoSQL databases (PostgreSQL, TimescaleDB, Elasticsearch).
    • Work with AWS infrastructure to ensure reliability, scalability, and cost efficiency.
    • Proactively identify bottlenecks and suggest technical improvements.

     

    We offer:
    • Working in a fast-growing company;
    • Great networking opportunities with international clients, challenging tasks;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.


     

  • 43 views · 3 applications · 4d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    A US-based MSP is looking for an experienced Data Engineer to design, implement, and maintain scalable data pipelines and cloud-native solutions. This role requires deep expertise in Python programming, cloud services, and SQL-based data modeling, with a strong emphasis on automation, reliability, and security.

    COMPANY 
    Atlas Technica — a US-based IT service provider for the hedge fund vertical. Founded and headquartered in New York in 2016, the company has been growing rapidly ever since and today comprises 300+ engineers across 10+ established locations in the US, UK, Ukraine, Hong Kong, and Singapore.

    Location/Type: Remote (Ukraine only)
    Hours: UA timezone, flexible

    RESPONSIBILITIES:

    • Build and maintain efficient ETL workflows using Python 3, applying both object-oriented and functional paradigms.
    • Write comprehensive unit, integration, and end-to-end tests; troubleshoot complex Python traces.
    • Automate deployment and integration processes.
    • Develop Azure Functions, configure and deploy Storage Accounts and SQL Databases.
    • Design relational schemas, optimize queries, and manage advanced MSSQL features including temporal tables, external tables, and row-level security.
    • Author and maintain stored procedures, views, and functions.
    • Collaborate with cross-functional teams.
       

    REQUIREMENTS:

    • 5+ years as a Data Engineer 
    • Strong Python (OOP + functional, production code, ETL workflows) 
    • Testing (pytest, unit/integration tests) 
    • ETL orchestration (Airflow, Dagster, or Prefect) 
    • Advanced SQL (optimization, schema design, complex queries) 
    • Cloud experience + serverless patterns (Azure Functions, Lambda, or Cloud Functions)
    • Git, CI/CD


      NICE TO HAVEs: 

      • Microsoft SQL Server / T-SQL experience (temporal tables, row-level security, external tables)
      • Azure-specific SDKs and services (Azure Functions SDK, Key Vault, ARM templates)
      • Microsoft certifications

     

    WE OFFER:

    • Direct long-term contract with a US-based company
    • Full-time remote role aligned with EST
    • B2B set-up via SP (FOP in $USD)
    • Competitive compensation
    • Annual salary reviews and performance-based bonuses
    • Company equipment provided for work
    • Professional, collaborative environment with the ability to influence strategic decisions
    • Opportunities for growth within a scaling global organization
  • 33 views · 4 applications · 4d

    Senior Data Engineer (Scala) — Tieto Tech Consulting (m/f/d)

    Full Remote · Ukraine · 5 years of experience · English - B2

    Tieto Tech Consulting is inviting a talented Data Engineer to join our growing team and support our customer BICS, a global telecommunication enabler with a physical network spanning the globe. In this role, you will work on the BICS Voice and CC Value Streams, delivering qualified customer and network support by designing, building, and optimizing large-scale data pipelines within the telecom domain. The position requires strong expertise in Scala Spark, Databricks, and AWS cloud services, and focuses on developing high-performance data platforms that enable network analytics, customer insights, real-time monitoring, and regulatory reporting.

     

    Key Responsibilities

    • Design, develop, and maintain scalable batch data pipelines using Scala, Databricks Spark, Databricks SQL and Airflow
    • Implement optimized ETL/ELT processes to ingest, cleanse, transform, and enrich large volumes of telecom network, usage, and operational data
    • Ensure pipeline reliability, observability, and performance tuning of Spark workloads
    • Build and manage data architectures leveraging AWS services such as (but not limited to) S3, Lambda, IAM, and CloudWatch
    • Implement infrastructure-as-code using Terraform
    • Ensure security best practices and compliance with telecom regulatory requirements (GDPR, data sovereignty, retention)
    • Collaborate with cross-functional teams (Architecture, DevOps, Network Engineering, Business Intelligence)
    • Document system designs, data flows, and best practices

     

    Requirements

    • 4+ years of experience as a Data Engineer or Big Data Developer
    • Strong proficiency in Scala and functional programming concepts
    • Advanced experience with Apache Spark (batch processing using the DataFrame API and low-level Spark APIs, performance tuning, cluster optimization)
    • Experience with optimized SQL-based data transformations for analytics and machine learning workloads
    • Hands-on experience with Databricks including notebooks, jobs, Delta Lake, Unity Catalog, and MLflow (nice to have)
    • Solid understanding of CI/CD practices with Git, Jenkins/Gitlab Actions
    • Strong AWS skills: S3, Lambda, IAM, CloudWatch, and related services
    • Knowledge of distributed systems, data governance, and security best practices
    • Experience with Airflow integration with AWS services for end-to-end orchestration across cloud data pipelines
    • Experience with IaC tools: Terraform or CloudFormation
    • Experience with Python is a plus
    • Experience with DBT is a plus
    • Experience with Snowflake is a plus

     

    Soft Skills

    • Strong analytical and problem-solving skills
    • High degree of ownership and a mindset for continuous improvement
    • Quality-oriented, pragmatic, and solution-oriented
    • Excellent communication and teamwork abilities
    • Ability to translate business requirements into technical solutions
    • Experience in telecom sector is a plus
    • Experience with an agile way of working is a plus
    • English proficiency
  • 23 views · 2 applications · 4d

    Senior Snowflake Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    Project description

    The project is for one of the world's best-known science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.

    We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools like DBT, Python, visualization tools like Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.

    Responsibilities

    Design and develop data pipelines using Snowflake and Snowpipe for real-time and batch ingestion.

    Implement CI/CD pipelines in Azure DevOps for seamless deployment of data solutions.

    Automate DBT jobs to streamline transformations and ensure reliable data workflows.

    Apply data modeling techniques including OLTP, OLAP, and Data Vault 2.0 methodologies to design scalable architectures.

    Document data models, processes, and workflows clearly for future reference and knowledge sharing.

    Build data tests, unit tests, and mock data frameworks to validate and maintain reliability of data solutions.

    Develop Streamlit applications integrated with Snowflake to deliver interactive dashboards and self-service analytics.

    Integrate SAP data sources into Snowflake pipelines for enterprise reporting and analytics.

    Leverage SQL expertise for complex queries, transformations, and performance optimization.

    Integrate cloud services across AWS, Azure, and GCP to support multi-cloud data strategies.

    Develop Python scripts for ETL/ELT processes, automation, and data quality checks.

    Implement infrastructure-as-code solutions using Terraform for scalable and automated cloud deployments.

    Manage RBAC and enforce data governance policies to ensure compliance and secure data access.

    Collaborate with cross-functional teams including business analysts, and business stakeholders to deliver reliable data solutions.

    Skills

    Must have

    Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).

    Hands-on experience with Python, SQL, Jinja, and JavaScript for data engineering tasks.

    CI/CD expertise using Azure DevOps (build, release, version control).

    Experience automating DBT jobs for data transformations.

    Experience building Streamlit applications with Snowflake integration.

    Cloud services knowledge across AWS (S3, Lambda, Glue), Azure (Data Factory, Synapse), and GCP (BigQuery, Pub/Sub).

    Nice to have

    Cloud certifications are a plus

    Languages

    English: B2 Upper Intermediate

  • 55 views · 14 applications · 4d

    Middle Cloud/Data Engineer

    Part-time · Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B2

    Metamindz is a fast-growing UK-based IT software company. We support global clients by providing fractional CTOs-as-a-service, building digital products, hiring exceptional technical talent, and conducting in-depth tech due diligence.

     

    We're currently looking for a Cloud & Data Engineer (GCP / IoT) to join one of our startup clients in a part-time engagement. This is an opportunity for a hands-on engineer who can take ownership of cloud data platforms and backend systems, working with high-volume IoT data and real-time analytics in production environments.

     

    Responsibilities:

     

    • Own and operate the cloud-based backend and data platform supporting large-scale IoT deployments
    • Architect, build, and maintain high-volume data ingestion pipelines using GCP services (BigQuery, Dataflow, Pub/Sub)
    • Design and manage streaming and batch data workflows for real-time and historical analytics
    • Define data storage, querying, retention, and archiving strategies across warehouses and data lakes
    • Ensure backend services, APIs, and data pipelines are secure, scalable, observable, and fault-tolerant
    • Set up monitoring, logging, alerting, and recovery strategies for event-driven workloads
    • Collaborate closely with the CTO, embedded engineers, and product teams to align device capabilities with cloud and data architecture
    • Contribute to data platform evolution, including governance, access policies, and metadata management

     

    Requirements:

     

    • 3–5 years of commercial engineering experience in cloud, data, or backend roles
    • Strong hands-on experience with GCP and its data ecosystem (BigQuery, Dataflow, Pub/Sub)
    • Solid experience with relational databases (Postgres, MySQL), including schema design, migrations, indexing, and scaling strategies
    • Proven experience building and maintaining data pipelines, particularly for IoT or time-series data
    • Hands-on experience with Python (Node.js is a plus)
    • Experience designing and consuming APIs in distributed or microservices-based systems
    • Familiarity with CI/CD pipelines, environment management, and Infrastructure as Code (Terraform)
    • Good understanding of cloud security, IAM, and best practices for production systems
    • Ability to work independently in a startup environment and make pragmatic technical decisions

     

    Nice to Have:

     

    • Google Professional Data Engineer certification
    • Experience with orchestration tools such as Airflow / Cloud Composer
    • Exposure to applied ML or AI use cases (e.g. anomaly detection, forecasting on IoT data)
    • Experience using managed ML services like GCP Vertex AI

     

    What We Offer:

     

    • Opportunity to work on a real-world, IoT-powered product with visible impact
    • High ownership and influence over technical architecture and data strategy
    • Collaborative startup environment with direct access to decision-makers
    • Modern cloud stack and meaningful engineering challenges around scale and reliability
    • Competitive compensation aligned with experience and responsibilities

     

    How to Apply:

     

    Please send a short blurb about yourself — and tell us your favorite ice cream flavor (mine is cherry 🍒)

  • 59 views · 9 applications · 4d

    Senior Data Engineer (with relocation to Cyprus)

    Full Remote · Worldwide · 6 years of experience · English - B2

    About the project:

    We are looking for a Senior Data Engineer to take ownership of and evolve both our Data Warehouse and core databases within a microservices-based, multi-tenant application. This role encompasses more than just analytics and includes broad responsibility for database architecture, ensuring data consistency, and managing technical debt across production systems. It is a hands-on opportunity focused on long-term stewardship of the data layer, working closely with backend engineers and product teams to deliver scalable, reliable, and maintainable data infrastructure.

    Please note that this position requires Cyprus-based candidates or readiness to relocate to Cyprus after the probation period. We provide full support throughout the relocation process, along with a relocation bonus 🎁

     

    A new team member will be in charge of:

    • Designing, developing, and maintaining scalable data warehouse solutions.
    • Building and optimizing ETL/ELT pipelines for efficient data integration.
    • Designing and implementing data models to support analytical and reporting needs.
    • Ensuring data integrity, quality, and security across all pipelines.
    • Optimizing data performance and scalability using best practices.
    • Working with big data technologies such as Redshift.
    • Collaborating with cross-functional teams to understand business requirements and translate them into data solutions.
    • Implementing CI/CD pipelines for data workflows.
    • Monitoring, troubleshooting, and improving data processes and system performance.
    • Staying updated with industry trends and emerging technologies in data engineering.
    • Taking ownership of core production databases across a microservices, multi-tenant application.

       

    Does this relate to you?

    • 6+ years of experience in Data Engineering or a related field.
    • Strong expertise in SQL and data modeling concepts.
    • Hands-on experience with Airflow.
    • Experience working with Redshift.
    • Proficiency in Python for data processing.
    • Strong understanding of data governance, security, and compliance.
    • Experience in implementing CI/CD pipelines for data workflows.
    • Ability to work independently and collaboratively in an agile environment.
    • Excellent problem-solving and analytical skills.
    • Experience managing production databases in microservices environments.
    • Experience designing and supporting multi-tenant data models.
    • English proficiency at an upper-intermediate level.

     

    Ready to try your hand? Don't hesitate to send your CV!

  • 11 views · 0 applications · 4d

    Senior Analytics Engineer

    Office Work · Poland · Product · 4 years of experience · English - C1

    Location: Warsaw, Poland
    Format: Full-time
    Type of contract: B2B / Ukrainian FOP
    Seniority: Senior


    🧠 About Us
    At EPC Network, we're not just a digital marketing company; we're a platform for career transformation and personal growth. Our people-first approach shapes our corporate culture, fostering a team of passionate, smart, and energetic individuals who strive to grow professionally in an international, fast-paced environment.

    We believe our Team is the foundation of our success, and now we're looking for a Senior Analytics Engineer to join our Analytics & Growth direction!


    🎯 Job Description
    As a Senior Analytics Engineer, you'll be responsible for building and owning the data, tracking, and analytics foundation for a large and rapidly scaling portfolio of newsletters.

    This role is critical to ensuring our analytics stack is accurate, scalable, and automation-ready across 100+ newsletters. You'll work not only with dashboards and reporting, but also with data pipelines, APIs, cloud tools, and AI-driven workflows to support performance, growth, and monetization decisions.

    You should think like a marketer, build like a data engineer, and act like an owner. This role has a clear growth path toward building and leading an analytics/data team as our data complexity grows.


    🔧 Key Responsibilities

    • Build and maintain dashboards and reporting systems (Looker Studio or similar BI tools)
    • Design, maintain, and optimize data pipelines (Google Sheets, BigQuery, APIs, Python scripts)
    • Track, analyze, and report key marketing, growth, and audience performance metrics
    • Identify anomalies, data inconsistencies, or tracking issues at early stages
    • Ensure clean, accurate, and scalable data structures across all sources
    • Support publishing, marketing, and growth teams with actionable insights
    • Explore and implement AI-driven analytics and automation solutions


    📌 Requirements

    • Strong skills in Google Sheets / Excel (advanced formulas, large datasets)
    • Solid experience with SQL (queries, joins, aggregations)
    • Hands-on experience with Python for data processing, automation, or analysis
    • Experience with BI tools (Looker Studio or similar)
    • Experience working with Google Cloud Platform, especially BigQuery
    • Comfortable working with CSV exports, APIs, and structured/unstructured data
    • Understanding of marketing analytics, funnels, attribution, and audience metrics
    • Detail-oriented, fast, and reliable when working with large data volumes
    • Proactive mindset with strong problem-solving skills
    • English level: C1 or higher


    🤝 What it means to be part of our Team

    Your professional and personal development:
    🙋 Multinational and intercultural experience
    📚 Corporate library
    💪 A world-class team to work with
    🎓 Growth opportunities
    💻 Cutting-edge frameworks and technologies

    Well-being:
    💰 Competitive salary
    🎳 Plenty of engaging team-building and social events
    🎁 Bonuses according to the policy
    🌴 21 paid vacation days & 14 paid sick leaves
    🧘 Work-life balance

    Working environment:
    🏢 Cozy office in Warsaw available for you whenever you need it
    🥪 Coffee, tea, Red Bull, sweets, fruits, and more snacks
    🧐 Adequate teammates


    💌 Interested?
    We're always on the lookout for passionate, driven, and curious people to join our team.
    If that sounds like you — we'd love to hear from you!

    Please make sure to include your Telegram nickname in the cover letter.

    If you're our diamond, reach out! We are waiting for you 💎

  • 25 views · 3 applications · 4d

    Senior Data Engineer (AWS, E-commerce Analytics)

    Full Remote · Ukraine · 3 years of experience · English - C1

    About the Project: We are looking for a Senior Data Engineer to take over and further develop our e-commerce analytics solution. The project focuses on building a "Product as a Service" (SaaS) platform that helps customers improve their business through data-driven insights. You will be responsible for the full lifecycle of data: from scraping and parsing to making it available for AI model training.

  • 95 views · 17 applications · 4d

    Data Engineering Lead

    Full Remote · Worldwide · Product · 5 years of experience · English - None

    About Traffic Label
    Traffic Label is a performance marketing and technology company with nearly two decades of experience driving engagement and conversion across the iGaming and digital entertainment sectors.
    We're now building a Customer Data Platform (CDP) on Snowflake and AWS - unifying player data across multiple brands to power automation, insights, and personalization.

    The Role
    We're looking for a Data Engineering Lead to own the technical delivery and development of this platform. You'll architect scalable pipelines, lead a small team, and ensure data reliability, accuracy, and performance.
    Team size: 3–4 engineers/analysts

    Key Responsibilities

    • Design and implement scalable data pipelines processing millions of events daily
    • Own Snowflake data warehouse architecture, optimization, and cost control
    • Lead the engineering team through delivery and performance improvements
    • Ensure >95% data accuracy and 99.9% pipeline uptime
    • Collaborate with marketing, analytics, and compliance teams to align data with business goals

    Requirements

    • 5+ years in data engineering, 2+ in leadership roles
    • Expert in Snowflake, SQL, and Python
    • Proficient with AWS (S3, Lambda, IAM) and orchestration tools (Airflow, dbt, etc.)
    • Strong understanding of data governance, cost optimization, and performance tuning
    • Experience with iGaming data, Kafka/Kinesis, or MLflow is a plus

    Why Join Us

    • Build a core data platform from the ground up
    • Competitive salary and performance bonuses
    • Flexible remote or hybrid work across Europe
    • Supportive, innovative, data-driven culture

    Ready to lead a data platform that powers smarter decisions across global iGaming brands?
    Apply now to join Traffic Label's Data & Technology team.

  • 29 views · 10 applications · 5d

    Senior Data Engineer (Snowflake and Informatica)

    Full Remote · EU · 5 years of experience · English - B2

    We are looking for a Senior Data Engineer to work with an existing enterprise Data Warehouse. The role involves building, supporting, and enhancing data pipelines and data assets while working under defined SLAs in a regulated utility environment.

    Who are we looking for?
    ● 5+ years of experience in data engineering or data-centric roles
    ● Proven hands-on experience supporting production data platforms
    ● Strong experience with Snowflake, including: data structures and transformations, working with existing schemas and layered architectures.
    ● Solid experience with Informatica Cloud (IDMC / CDI): building and supporting ETL / ELT pipelines
    ● Good understanding of ETL / ELT concepts and data pipelines
    ● Experience working under SLAs and structured support processes
    ● Ability to investigate, diagnose, and resolve data incidents and defects
    ● Experience performing impact analysis and technical validation for changes
    ● Familiarity with Agile delivery and tools such as Jira
    ● Willingness to travel occasionally for business trips to the UK.
    ● Strong communication skills and ability to work closely with both technical and business stakeholders.
    ● Excellent proficiency in both verbal and written English communication skills.

     

    We offer:
    ● A place with a friendly environment where you can reach your full potential and grow your career
    ● Flexible work schedules
    ● Work from home
    ● Social package: paid sick leave and vacation
    ● English courses, medical insurance, legal support, etc.

     

    As a Senior Data Engineer you will:
    ● Build, maintain, and support data pipelines using Informatica IDMC (CDI)
    ● Develop and support data transformations and data assets in Snowflake.
    ● Ensure stable operation of data pipelines in line with BAU and SLA requirements.
    ● Investigate and resolve production incidents and defects.
    ● Deliver approved service requests and incremental enhancements.
    ● Perform impact analysis and technical validation for data changes.
    ● Execute unit testing and support release activities.
    ● Produce and maintain technical documentation and operational artefacts.
    ● Collaborate closely with other Data Engineers, BI specialists, and stakeholders
    ● Operate within defined role boundaries, without ownership of business rules, data definitions, or platform configuration.


     

    Our client is a global energy company focused on renewable power generation and low-carbon energy solutions. Operating across multiple regions, the company develops, builds, and operates large-scale energy assets, including wind, solar, and hybrid power projects. The company works in a highly regulated environment, where data accuracy, traceability, and reliability are essential.

  • 233 views · 14 applications · 5d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B1

    We are looking for you!

    As we architect the next wave of data solutions in the AdTech and MarTech sectors, we're looking for a Senior Data Engineer — a maestro in data architecture and pipeline design. If you're a seasoned expert eager to lead, innovate, and craft state-of-the-art data solutions, we're keen to embark on this journey with you.

    Contract type: Gig contract.

    Skills and experience you can bring to this role

    Qualifications & experience:

    • 6+ years of intensive experience as a Data Engineer or in a similar role, with a demonstrable track record of leading large-scale projects;
    • Mastery in Python, SQL;
    • Deep understanding and practical experience with cloud data warehouses (Snowflake, BigQuery, Redshift);
    • Extensive experience building data and ML pipelines;
    • Experience with modern Scrum-based Software Development Life Cycle (SDLC);
    • Deep understanding of Git and its workflows;
    • Open to collaborating with data scientists and businesses.

    Nice to have:

    • Hands-on experience with Dagster, dbt, Snowflake and FastAPI;
    • Proven expertise in designing and optimizing large-scale data pipelines;
    • Comprehensive understanding of data governance principles and data quality management practices;
    • Understand marketing and media metrics (i.e., what conversion rate is and how it is calculated);
    • Exceptional leadership, communication, and collaboration skills, with a knack for guiding and nurturing teams.
       

    Educational requirements:

    • Bachelor's degree in Computer Science, Information Systems, or a related discipline is preferred. A Master's degree or higher is a distinct advantage.

    What impact you'll make

    • Lead the design, development, testing, and maintenance of scalable data architectures, ensuring they align with business and technical objectives;
    • Spearhead the creation of sophisticated data pipelines using Python, leveraging advanced Snowflake capabilities such as Data Shares, Snowpipe, Snowpark, and more;
    • Collaborate intensively with data scientists, product teams, and other stakeholders to define and fulfill intricate data requirements for cross-channel budget optimization solutions;
    • Drive initiatives for new data collection, refining existing data sources, and ensuring the highest standards of data accuracy and reliability;
    • Set the gold standard for data quality, introducing cutting-edge tools and frameworks to detect and address data inconsistencies and inaccuracies; 
    • Identify, design, and implement process improvements, focusing on data delivery optimization, automation of manual processes, and infrastructure enhancements for scalability.

    What you'll get

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.

  • 162 views · 34 applications · 5d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    We are looking for you!

    As we continue to design and build data-driven solutions across diverse domains, we're seeking a Data Engineer who thrives on transforming data into impactful insights. If you're passionate about crafting robust architectures, optimizing data pipelines, and enabling intelligent decision-making at scale, we'd love to have you join our global team and shape the next generation of data excellence with us.

    Contract type: Gig contract.

    Skills and experience you can bring to this role

    Qualifications & experience:

    • 3+ years of intensive experience as a Data Engineer or in a similar role, with a demonstrable track record of leading large-scale projects;
    • Mastery in Python and data stack (NumPy, Pandas, scikit-learn);
    • Good understanding of SQL/RDBMS and familiarity with data warehouses (BigQuery, Snowflake, Redshift, etc.);
    • Experience building ETL data pipelines (Airflow, Prefect, Dagster, etc);
    • Experience with modern Scrum-based Software Development Life Cycle (SDLC);
    • Strong communication skills to explain technical insights to non-technical stakeholders.

    Nice to have:

    • Hands-on experience with the Python web stack (FastAPI / Flask);
    • Proven expertise in designing and optimizing large-scale data pipelines;
    • Comprehensive understanding of data governance principles and data quality management practices;
    • Understand marketing and media metrics (i.e., what conversion rate is and how it is calculated).
    • Exceptional leadership, communication, and collaboration skills, with a knack for guiding and nurturing teams.

       

    Educational requirements:

    • Bachelor's degree in Computer Science, Information Systems, or a related discipline is preferred.

    What impact you'll make

    • Lead the design, development, testing, and maintenance of scalable data architectures, ensuring they align with business and technical objectives;
    • Spearhead the creation of sophisticated data pipelines using Python, leveraging advanced Snowflake capabilities such as Data Shares, Snowpipe, Snowpark, and more;
    • Collaborate intensively with data scientists, product teams, and other stakeholders to define and fulfill intricate data requirements for cross-channel budget optimization solutions;
    • Drive initiatives for new data collection, refining existing data sources, and ensuring the highest standards of data accuracy and reliability;
    • Set the gold standard for data quality, introducing cutting-edge tools and frameworks to detect and address data inconsistencies and inaccuracies; and
    • Identify, design, and implement process improvements, focusing on data delivery optimization, automation of manual processes, and infrastructure enhancements for scalability.

    What you'll get

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.

  • 372 views · 33 applications · 5d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · English - B2

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:
    • 6+ months of experience as a Data Engineer;
    • Experience with SQL;
    • Experience with Python.

     

     

    Optional skills (as a plus):
    • Experience with ETL / ELT pipelines;
    • Experience with PySpark;
    • Experience with Airflow;
    • Experience with Databricks.

     

    Key Responsibilities:
    • Apply data processing algorithms;
    • Create ETL/ELT pipelines and data management solutions;
    • Work with SQL queries for data extraction and analysis;
    • Analyze data and apply data processing algorithms to solve business problems.

     

     

    We offer:
    • Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
    • Opportunity to work with a highly skilled engineering team on challenging projects;
    • Interesting projects with new technologies;
    • Great networking opportunities with international clients, challenging tasks;
    • Building interesting projects from scratch using new technologies;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.

  • 30 views · 3 applications · 5d

    Performance Engineer (Data Platform / Databricks)

    Full Remote · EU · Product · 3 years of experience · English - B2

    We are looking for a specialist to design and implement an end-to-end performance testing framework for a healthcare system running on Databricks and Microsoft Azure. You will build a repeatable, automated approach to measure and improve performance across data ingestion, ETL/ELT pipelines, Spark workloads, serving layers, APIs, security/identity flows, integration components, and presentation/UI, while meeting healthcare-grade security and compliance expectations.

    This role sits at the intersection of performance engineering, cloud architecture, and test automation, with strong attention to regulated-domain requirements (privacy, auditability, access controls).

    Key Responsibilities

    • Design and build a performance testing strategy and framework for a Databricks + Azure healthcare platform.
    • Define performance KPIs/SLOs (e.g., pipeline latency, throughput, job duration, cluster utilization, cost per run, data freshness).
    • Create workload models that reflect production usage (batch, streaming, peak loads, concurrency, backfills).
    • Create a test taxonomy: smoke perf, baseline benchmarks, load, stress, soak/endurance, spike tests, and capacity planning.

       
    • Implement automated performance test suites for:
      • Databricks jobs/workflows (Workflows, Jobs API)
      • Spark/Delta Lake operations (reads/writes, merges, compaction, Z-Ordering where relevant)
      • Data ingestion (ADF, Event Hubs, ADLS Gen2, Autoloader, etc. as applicable)
    • Build test data generation and data anonymization/synthetic data approaches suitable for healthcare contexts.
    • Instrument, collect, and analyze metrics from:
      • Spark UI / event logs
      • Databricks metrics and system tables
      • Azure Monitor / Log Analytics
      • Application logs and telemetry (if applicable)
    • Produce actionable performance reports and dashboards (trend, regression detection, run-to-run comparability).
    • Create performance tests for key user journeys (page load, search, dashboards) using appropriate tooling.
    • Measure client-side and network timings and correlate them with API/backend performance.
    • Integrate performance tests into CI/CD (Azure DevOps or GitHub Actions), including gating rules and baselines.
    • Document framework usage, standards, and provide enablement to engineering teams.

    Required Qualifications

    • Proven experience building performance testing frameworks (not just executing tests), ideally for data platforms.
    • Strong hands-on expertise with Databricks and Apache Spark performance tuning and troubleshooting.
    • Strong knowledge of Azure services used in data platforms (commonly ADLS Gen2, ADF, Key Vault, Azure Monitor/Log Analytics; others as relevant).
    • Strong programming/scripting ability in Python and/or Java/TypeScript.
    • Familiarity with load/performance tools and approaches (e.g., custom harnesses, Locust/JMeter/k6 where appropriate, or Spark-specific benchmarking).
    • Ability to design repeatable benchmarking (baseline creation, environment parity, noise reduction, statistical comparison).
    • Understanding of data security and compliance needs typical for healthcare (e.g., HIPAA-like controls, access management, auditability; adapt to your jurisdiction).
    • High-level proficiency in English

    Nice-to-Have / Preferred

    • Experience with Delta Lake optimization (OPTIMIZE, ZORDER, liquid clustering where applicable), streaming performance, and structured streaming.
    • Experience with Terraform/IaC for reproducible test environments.
    • Knowledge of Unity Catalog, data governance, and fine-grained access controls.
    • Experience with OpenTelemetry tracing and correlation across UI → API → data workloads.
    • FinOps mindset: performance improvements tied to cost efficiency on Databricks/Azure.
    • Prior work on regulated domains (healthcare, pharma, insurance).

     Working Model

    • Contract
    • Remote
    • Collaboration with Data Engineering, Platform Engineering, Security/Compliance, and Product teams.


     
