Data Engineer Jobs

  • 42 views · 7 applications · 5d

    Data Engineer (Snowflake, dbt, Airflow) - Middle, Senior

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Short overview: 

    Remote, full-time commitment, hourly payment, working mostly in the Kyiv time zone, though communication may stretch into EST for calls.


    About the Project

    You will be joining a data-focused project centered around building and maintaining a modern data platform. The project involves designing scalable data pipelines, developing a robust data warehouse, and enabling reliable analytics through well-structured data models.

    The work requires strong Python skills and includes hands-on development with Snowflake, dbt, and Apache Airflow.
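    As an aside for candidates, here is a minimal sketch of how this stack is commonly wired together: an Airflow DAG that runs and then tests a dbt project whose profile targets Snowflake. The DAG id, paths, and dbt target are illustrative assumptions, not details from the posting.

```python
# Hedged sketch (names are placeholders): a daily Airflow DAG that runs
# a dbt project against Snowflake, then runs dbt tests on the results.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_daily",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt --target prod",
    )
    dbt_run >> dbt_test                # only test after the models build
```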

    Requirements:
    • Experience in data engineering, software engineering, or a related role.
    • Strong proficiency in Python and SQL.
    • Experience building and operating production-grade data pipelines.
    • Proficiency in at least one additional language, such as Go or Java.
    • Deep hands-on experience with Apache Airflow.
    • Strong working knowledge of Snowflake.
    • Expert-level experience with dbt (Core & Cloud).
    • Strong experience with Kafka and streaming systems.
    • Experience designing and maintaining REST APIs.
    • Strong understanding of modern data architectures.
    • Experience with medallion architecture and dimensional modeling.
    • Experience implementing CI/CD pipelines for data workflows.
    • Experience working in cloud environments, preferably AWS.

    Nice to Have
    • Familiarity with Docker and Kubernetes.
    • Experience with ClickHouse or other OLAP databases.
    • Experience with Airbyte or similar integration tools.
    • Familiarity with data catalogs, lineage, or metadata management tools.
    • Experience enabling self-service analytics.
     

  • 38 views · 10 applications · 4d

    Senior Data Engineer

    Worldwide · Product · 4 years of experience · English - C1

    How about building a high-load data architecture that handles millions of transactions daily?
    We’re looking for a Senior Data Engineer with a growth path to Data Lead,
    ready to design scalable pipelines from scratch.
    An international iGaming company with a data-first mindset.
    Remote, top salary.

     

    Responsibilities

    – Build and run scalable pipelines (batch + streaming) that power gameplay, wallet, and promo analytics.

    – Model data for decisions (star schemas, marts) that Product, BI, and Finance use daily.

    – Make things reliable: tests, lineage, alerts, SLAs. Fewer surprises, faster fixes.

    – Optimize ETL/ELT for speed and cost (partitioning, clustering, late arrivals, idempotency); see the sketch after this list.

    – Keep promo data clean and compliant (PII, GDPR, access controls).

    – Partner with POs and analysts on bets/wins/turnover KPIs, experiment readouts, and ROI.

    – Evaluate tools, migrate or deprecate with clear trade-offs and docs.

    – Handle prod issues without drama, then prevent the next one.
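    On the idempotency point referenced in the list above, a hedged sketch (the table and the psycopg2-style connection are assumptions for illustration) of the delete-then-insert-in-one-transaction pattern that makes a daily reload safe to rerun:

```python
# Hypothetical sketch: an idempotent daily partition reload. Reruns for the
# same date never double-count, because delete + insert share one transaction.
import datetime as dt

def reload_day(conn, day: dt.date, rows: list[tuple]) -> None:
    with conn:  # psycopg2-style: commit on success, roll back on error
        cur = conn.cursor()
        cur.execute("DELETE FROM fact_bets WHERE event_date = %s", (day,))
        cur.executemany(
            "INSERT INTO fact_bets (event_date, player_id, bet_amount)"
            " VALUES (%s, %s, %s)",
            rows,
        )
```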

     

     

    Requirements

    – 4+ years building production data systems. You’ve shipped, broken, and fixed pipelines at scale.

    – SQL that sings and Python you’re proud of.

    – Real experience with OLAP and BI (Power BI / Tableau / Redash β€” impact > logo).

    – ETL/ELT orchestration (Airflow/Prefect or similar) and CI/CD for data.

    – Strong grasp of warehouses & lakes: incremental loads, SCDs, partitioning.

    – Data quality mindset: contracts, tests, lineage, monitoring.

    – Product sense: you care about player/client impact, not just rows processed.

    ✨ Nice to Have (tell us if you’ve got it)

    – Kafka (or similar streaming), ClickHouse (we like it), dbt (modular ELT).

    – AWS data stack (S3, IAM, MSK/Glue/Lambda/Redshift) or equivalents.

    – Containers & orchestration (Docker/K8s), IaC (Terraform).

    – Familiarity with AI/ML data workflows (feature stores, reproducibility).

    – iGaming context: provider metrics (bets/wins/turnover), regulated markets, promo events.

     

     

    We offer

    – Fully remote (EU-friendly time zones) or Bratislava/Malta/Cyprus if you like offices.

    – Unlimited vacation + paid sick leave.

    – Quarterly performance bonuses.

    – No micromanagement. Real ownership, real impact.

    – Budget for conferences and growth.

    – Product-led culture with sharp people who care.

     

     

    🧰 Our Day-to-Day Stack (representative)
    Python, SQL, Airflow/Prefect, Kafka, ClickHouse/OLAP DBs, AWS (S3 + friends), dbt, Redash/Tableau, Docker/K8s, GitHub Actions.

  • 50 views · 23 applications · 4d

    Python Data Engineer

    Full Remote · Worldwide · 5 years of experience · English - B2

    Core Responsibilities

    • Data Pipeline Management: Develop, optimize, and maintain scalable data pipelines to ensure high-quality data flow.

    • API Development: Build and maintain high-performance backend APIs using FastAPI (see the sketch after this list).

    • System Reliability: Proactively identify bottlenecks and improve system stability within existing infrastructures.

    • Collaboration: Work closely with cross-functional teams to integrate AWS services and workflow orchestration tools into the production environment.
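    For the FastAPI item referenced above, a minimal hedged sketch; the endpoints, model, and in-memory store are invented for illustration and are not taken from the project:

```python
# Hypothetical sketch of a small FastAPI service with one resource type.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Record(BaseModel):
    id: int
    value: float

_store: dict[int, Record] = {}  # stand-in for a real database

@app.post("/records", status_code=201)
def create_record(rec: Record) -> Record:
    _store[rec.id] = rec
    return rec

@app.get("/records/{rec_id}")
def read_record(rec_id: int) -> Record:
    if rec_id not in _store:
        raise HTTPException(status_code=404, detail="record not found")
    return _store[rec_id]
```

    Run locally with `uvicorn main:app --reload`.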

     

    Required Qualifications 

    • Experience: 3+ years of professional Python development experience.

    • Databases: Strong proficiency in both SQL and NoSQL database design and management.

    • DevOps Tools: Hands-on experience with Docker, CI/CD pipelines, and Git version control.

    • Frameworks: Proven experience building applications with FastAPI.

    • Cloud & Orchestration: Practical experience with AWS services and familiarity with Airflow (or similar workflow orchestration tools).

    • Communication: Upper-Intermediate level of English (written and spoken) for effective team collaboration.

     

     Preferred Skills (Nice to Have) 

    • Experience within the Financial Domain.

    • Hands-on experience with Apache Spark and complex ETL pipelines.

    • Knowledge of container orchestration using Kubernetes.

    • Exposure to or interest in Large Language Models (LLMs) and AI integration.

  • 30 views · 5 applications · 4d

    Principal Analytics Developer

    Full Remote · EU · 5 years of experience · English - B2

    The Principal Analytics Developer is a new role that will support the newly created Product Data Domain teams. The role requires strong skills in dimensional modelling, conforming and integrating data from multiple sources, as well as experience in leading strong analytics engineering teams.
    Responsibilities:
     

    • Planning workloads and delegating tasks in an agile environment
    • Assisting with the daily operation of the organisation, including support and incidents
    • Providing feedback to team members, including constructive areas for development
    • Leading on the design, implementation and maintenance of dimensional data models that promote a self-service approach to data consumption, including ensuring that data quality within the data warehouse is maintained throughout the data lifecycle
    • Defining best practices in dimensional data modelling and database design and ensuring standards are adhered to across the team
    • Mentoring, coaching and supporting other team members in developing data modelling skills through knowledge transfer
    • Automating data pipelines using proprietary technology & Airflow
    • Using your expert knowledge of the company products and their features to inform the design and development of data products, and upskilling the team through this knowledge
    • Developing ways of working between product data domains and other data teams within the product group
    • Creating processes for data product development, ensuring these processes are documented, and advocating their use throughout the organisation
    • Supporting analytics, data science and other colleagues outside the digital product area in managing projects and fielding queries
    • Building and maintaining strong working relationships where you might, as a specialist, have to manage the expectations of more senior colleagues
    • Working across mobile, web, television and voice platforms, supporting Product Managers and Business Analysts and working closely with Software & Data Engineers

       

    Requirements:
     

    • Extensive (5+ years) experience managing teams building data warehouses and analytics from a diverse set of data sources (including event streams and various forms of batch processing)
    • At least 5 years’ experience in a Data Analyst, Data Modelling, Data Engineering or Analytics Engineering role, preferably in digital products, with an interest in data modelling and ETL processes
    • Proven experience in dimensional modelling of complex data at the conceptual, logical and physical layers
    • Experience designing star schemas (see the sketch after this list)
    • Excellent SQL skills for extracting and manipulating data; experience with tools such as dbt, Looker and Airflow would be an advantage
    • Good knowledge of analytical database systems (Redshift, Snowflake, BigQuery)
    • Comfortable working alongside cross-functional teams, interacting with Product Managers, Engineers, Data Scientists, and Analysts
    • Knowledge of digital products and their components, as well as which metrics affect their performance
    • An understanding of how digital products use experimentation
    • Some experience coding in R or Python
    • A good understanding of on-demand audio and video media products, with knowledge of key competitors
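    For the star-schema item referenced in the list above, a minimal sketch of what such a model might look like for an on-demand media product; every table and column name is hypothetical, carried as SQL strings so they could be executed from Python:

```python
# Illustrative star-schema DDL; the employer's actual model is not described.
DIM_USER = """
CREATE TABLE dim_user (
    user_key   BIGINT PRIMARY KEY,  -- surrogate key
    user_id    VARCHAR NOT NULL,    -- natural key from the source system
    platform   VARCHAR,             -- mobile / web / tv / voice
    valid_from TIMESTAMP,           -- SCD2 validity window
    valid_to   TIMESTAMP
)
"""

FACT_PLAYBACK = """
CREATE TABLE fact_playback (
    date_key     INT    NOT NULL,   -- references dim_date
    user_key     BIGINT NOT NULL,   -- references dim_user
    content_key  BIGINT NOT NULL,   -- references dim_content
    play_seconds INT,               -- additive measures only
    play_count   INT
)
"""
```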
       

    Will be a plus:

     

    • Ability to listen to others’ ideas and build on them 
    • Ability to clearly communicate to both technical and non-technical audiences.
    • Ability to collaborate effectively, working alongside other team members towards the team’s goals, and enabling others to succeed, where possible.
    • Ability to prioritise. A structured approach and the ability to bring others on the journey.
    • Strong attention to detail
       
  • 18 views · 0 applications · 3d

    Infrastructure Developer (C++), Vinnytsia HUB, Ukraine

    Hybrid Remote · Ukraine · Product · 5 years of experience · English - B2

    An engineering and technology company that creates cutting-edge robotic, autonomous, and mission-critical systems used in real-world conditions around the world. Teams work on complex hardware and software solutions, from system architecture and electronics to high-performance real-time software.
     

    The company's employees work in international engineering hubs, where local talent interacts with teams and partners from different countries, sees the direct impact of their work, and participates in global projects. This opens up opportunities for professional growth, development of expertise in robotics and autonomous systems, and participation in the creation of innovative solutions that shape the future of high-tech industries.

    We are looking for an Infrastructure Developer to take ownership of the core system infrastructure that ensures reliable, low-latency, real-time operation. You will work with Linux, embedded platforms, and video systems, collaborating with backend, frontend, and hardware teams to maintain system stability, performance, and scalability throughout the full software lifecycle. This is a unique opportunity to work on complex, real-world systems at the intersection of robotics, autonomy, and high-performance software engineering.


    KEY RESPONSIBILITIES
    • Develop, maintain, and optimize infrastructure and low-level components for embedded systems.
    • Develop and maintain video pipelines for real-time and low-latency systems.
    • Build, customize, and maintain Linux kernels and BSPs.
    • Develop and maintain Docker-based build and deployment environments for embedded systems.
    • Optimize system performance, latency, reliability, and resource usage.
    • Debug, profile, and maintain complex production and embedded systems.
    • Conduct code reviews and ensure high code quality and adherence to best practices.
    • Collaborate with cross-disciplinary teams to deliver robust system solutions.

    BASIC QUALIFICATIONS

    • At least 5 years of hands-on C++ development experience.
    • Strong experience working in Linux-based environments.
    • Experience with Docker and containerized deployments.
    • Experience with real-time or low-latency systems.
    • Strong debugging, profiling, and performance optimization skills.
    • Experience with Git and modern development tools.
    • Ability to work independently and take ownership of infrastructure components.

    PREFERRED SKILLS AND EXPERIENCE

    • Experience with video streaming protocols (e.g., RTP, RTSP, WebRTC).
    • Experience with GStreamer (see the sketch after this list).
    • Familiarity with GPU / hardware-accelerated video pipelines.
    • Background in robotics or autonomous systems.
    • Experience with mission-critical or safety-critical environments.
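    For the GStreamer item referenced above, a toy sketch using the Python bindings (PyGObject); a production low-latency pipeline on this kind of project would use camera or RTSP sources and hardware-accelerated elements instead of the test source shown here:

```python
# Toy GStreamer pipeline via PyGObject: a test source rendered to a window.
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch("videotestsrc ! videoconvert ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or errors out, then tear down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```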

    WHAT WE OFFER
    • Experience in a fast-growing, highly innovative global industry.
    • Excellent work conditions and open-minded team.
    • Corporate events, regular internal activities and other benefits.
    • Professional development opportunities and training.

  • 62 views · 21 applications · 3d

    Data Engineer / Data Architect

    Full Remote · Worldwide · Product · 4 years of experience · English - B2

    We are a leading sea moss superfood brand in the U.S., on a mission to redefine natural wellness by making it simple, accessible, and affordable for everyone, everywhere.
    As we continue to scale across multiple marketplaces and marketing channels, data has become our backbone. We are now looking for a Data Engineer / Data Architect who will take ownership of building a fast, reliable, and scalable data infrastructure for our e-commerce, marketing, and finance teams.

     

    Why join us?

    You’ll be building the core data infrastructure for a fast-growing, multi-channel wellness brand. This role has real ownership, high impact, and direct influence on how leadership makes decisions across marketing, finance, and operations.

     

    What we’re looking for:

    You’re a builder at heart. You’ve worked with e-commerce data before and understand how marketing platforms, stores, subscriptions, and marketplaces connect behind the scenes.
     

    You are:
    ● Experienced in e-commerce data β€” You’ve built or maintained data systems for online brands before.
    ● System-oriented β€” You think in pipelines, schemas, and automation.
    ● Fast and pragmatic β€” You can move quickly and deliver working solutions, not just plans.
    ● Detail-driven β€” You care about data accuracy, consistency, and definitions.
    ● Independent β€” You can take ownership of the data stack end to end.
    ● Collaborative β€” You work closely with marketing and finance teams to understand real needs.

     

    What you’ll do:

    Data Collection & Integration
    ● Pull data via APIs from Google Ads, Meta, Shopify, Recharge, Klaviyo, Postscript, Amazon, TikTok, Walmart, and other platforms.
    ● Build reliable pipelines to collect and sync data continuously.
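    As a hedged illustration of what "pull via APIs and land the data" can look like, a sketch that pages through a REST endpoint and writes raw JSON to S3; the endpoint shape, auth, and pagination scheme are assumptions, since each platform above has its own API:

```python
# Hypothetical sketch: page through a REST API and land raw JSON in S3.
import json

import boto3
import requests

s3 = boto3.client("s3")

def pull_to_s3(base_url: str, token: str, bucket: str, prefix: str) -> None:
    url, page = f"{base_url}/orders", 0   # invented endpoint
    while url:
        resp = requests.get(
            url, headers={"Authorization": f"Bearer {token}"}, timeout=30
        )
        resp.raise_for_status()
        body = resp.json()
        s3.put_object(
            Bucket=bucket,
            Key=f"{prefix}/page={page:05d}.json",
            Body=json.dumps(body["data"]).encode("utf-8"),
        )
        url = body.get("next")            # cursor pagination until exhausted
        page += 1
```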

    Data Infrastructure & Storage

    ● Store and manage data in AWS (S3, Redshift) and/or BigQuery.
    ● Design scalable data models and schemas for marketing and finance use cases.
    ● Ensure data is clean, matched, transformed, and ready for analysis.

    Transformation & Automation
    ● Transform raw data based on business requirements (metrics, attribution, matching).
    ● Ensure automatic updates and stable refresh schedules.
    ● Optimize performance to eliminate delays and bottlenecks.

    Reporting & Accessibility
    ● Make data available via Google Sheets, data warehouses, and BI tools.
    ● Enable seamless access for Power BI (and other BI tools if needed).
    ● Support leadership dashboards and recurring reports.
     

    Key Technical Skills
    ● AWS: S3, Redshift, Airflow, Data Pipelines
    ● SQL
    ● Python
    ● Experience with API integrations
    ● BI tools: Power BI, Tableau (nice to have)

     

    What we offer:

    ● Welcome Pack and custom True Sea Moss merch to ensure you arrive like you were always meant to be here
    ● Sports reimbursement to support your physical and mental health
    ● Coaching & career consultations to support your personal and professional growth
    ● Access to corporate English lessons to sharpen your communication skills
    ● WHOOP membership to help you track your health, sleep, and recovery
    ● Coworking membership if you prefer a hybrid work lifestyle
    ● Sabbatical options after long-term contributions
    ● Project grants for side ideas, personal initiatives, or creative experiments
    If you’re ready to build clean, scalable data systems that power a real, fast-growing wellness brand β€” we’d love to hear from you.

  • 38 views · 2 applications · 3d

    Junior Snowflake Data Engineer

    Full Remote · Ukraine · 2 years of experience · English - B2

    The project is for one of the world's most famous science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.

    We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools like dbt, Python, visualization tools like Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.

    Responsibilities:

      • In-depth knowledge of Snowflake's data warehousing capabilities.
      • Understanding of Snowflake's virtual warehouse architecture and how to optimize performance and cost.
      • Proficiency in using Snowflake's data sharing and integration features for seamless collaboration.
      • Develop and optimize complex SQL scripts, stored procedures, and data transformations.
      • Work closely with data analysts, architects, and business teams to understand requirements and deliver reliable data solutions.
      • Implement and maintain data models, dimensional modeling for data warehousing, data marts, and star/snowflake schemas to support reporting and analytics.
      • Integrate data from various sources including APIs, flat files, relational databases, and cloud services.
      • Ensure data quality, data governance, and compliance standards are met.
      • Monitor and troubleshoot performance issues, errors, and pipeline failures in Snowflake and associated tools.
      • Participate in code reviews, testing, and deployment of data solutions in development and production environments.

    Mandatory Skills Description:

      • 2+ years of experience
      • Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).
      • Ability to write complex SQL queries, stored procedures, and user-defined functions.
      • Skills in optimizing SQL queries for performance and efficiency.
      • Experience with ETL/ELT tools and techniques, including Snowpipe, AWS Glue, Openflow, Fivetran, or similar tools for real-time and periodic data processing.
      • Proficiency in transforming data within Snowflake using SQL, with Python being a plus (see the sketch after this list).
      • Strong understanding of data security, compliance and governance.
      • Experience with dbt for database object modeling and provisioning.
      • Experience with version control tools, particularly Azure DevOps.
      • Good documentation and coaching practices.
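    For the SQL/Python transformation item referenced in the list above, a hedged sketch using snowflake-connector-python; the account details, schema, and MERGE target are placeholders, not project specifics:

```python
# Hypothetical sketch: an upsert into a Snowflake dimension table via the
# official Python connector. All names and credentials are placeholders.
import snowflake.connector

MERGE_SQL = """
MERGE INTO analytics.dim_customer AS tgt
USING staging.customer_updates AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN
  UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN
  INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="PROD", schema="ANALYTICS",
)
try:
    conn.cursor().execute(MERGE_SQL)
finally:
    conn.close()
```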

  • 15 views · 1 application · 3d

    Senior Snowflake Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    The project is for one of the world's most famous science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.

    We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools like dbt, Python, visualization tools like Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.

    Responsibilities:

      • In-depth knowledge of Snowflake's data warehousing capabilities.
      • Understanding of Snowflake's virtual warehouse architecture and how to optimize performance and cost.
      • Proficiency in using Snowflake's data sharing and integration features for seamless collaboration.
      • Develop and optimize complex SQL scripts, stored procedures, and data transformations.
      • Work closely with data analysts, architects, and business teams to understand requirements and deliver reliable data solutions.
      • Implement and maintain data models, dimensional modeling for data warehousing, data marts, and star/snowflake schemas to support reporting and analytics.
      • Integrate data from various sources including APIs, flat files, relational databases, and cloud services.
      • Ensure data quality, data governance, and compliance standards are met.
      • Monitor and troubleshoot performance issues, errors, and pipeline failures in Snowflake and associated tools.
      • Participate in code reviews, testing, and deployment of data solutions in development and production environments.

    Mandatory Skills Description:

      • 5+ years of experience
      • Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).
      • Ability to write complex SQL queries, stored procedures, and user-defined functions.
      • Skills in optimizing SQL queries for performance and efficiency.
      • Experience with ETL/ELT tools and techniques, including Snowpipe, AWS Glue, Openflow, Fivetran, or similar tools for real-time and periodic data processing.
      • Proficiency in transforming data within Snowflake using SQL, with Python being a plus.
      • Strong understanding of data security, compliance and governance.
      • Experience with dbt for database object modeling and provisioning.
      • Experience with version control tools, particularly Azure DevOps.
      • Good documentation and coaching practices.

  • 29 views · 8 applications · 3d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · English - B2

    🎯 What You’ll Actually Do

    • Design and run high-throughput, production-grade data pipelines.
    • Own data correctness, latency, and availability end to end.
    • Make hard trade-offs: accuracy vs speed, cost vs freshness, rebuild vs patch.
    • Design for change - schema evolution, reprocessing, and new consumers.
    • Protect BI, Product, and Ops from breaking changes and silent data issues.
    • Build monitoring, alerts, and data quality checks that catch problems early.
    • Work side-by-side with Product, BI, and Engineering β€” no handoffs, shared ownership.
    • Step into incidents, own RCA, and make sure the same class of failure never repeats.

    This is a hands-on senior IC role with real accountability.

     

     

    🧠 What You Bring (Non-Negotiable)

    • 5+ years in data or backend engineering on real production systems.
    • Strong experience with analytical databases
      (ClickHouse, Snowflake, BigQuery, or similar).
    • Experience with event-driven or streaming systems
      (Kafka, CDC, pub/sub).
    • Solid understanding of:
      • at-least-once vs exactly-once semantics
      • schema evolution & backfills
      • mutation and reprocessing costs
    • Strong SQL and at least one programming language
      (Python, Java, Scala, etc.).
    • You don’t just ship - you own what happens after.
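    On the delivery-semantics item referenced in the list above, a hedged sketch (confluent-kafka assumed; the durable dedupe store is faked with an in-memory set) of turning at-least-once delivery into effectively-once processing by keying on stable event ids:

```python
# Hypothetical sketch: at-least-once Kafka consumption made effectively-once
# by deduplicating on the message key before writing downstream.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",  # placeholder address
    "group.id": "analytics-loader",
    "enable.auto.commit": False,         # commit only after processing
})
consumer.subscribe(["events"])

seen: set[bytes] = set()  # stand-in for a durable dedupe store

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error() or msg.key() is None:
        continue
    if msg.key() not in seen:            # redelivered duplicates are skipped
        seen.add(msg.key())
        # write_to_warehouse(msg.value())  # hypothetical idempotent sink
    consumer.commit(message=msg)         # offset advances after processing
```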

       

    πŸ”§ How We Work

    • Reliability > cleverness.
    • Ownership > process.
    • Impact > output.
    • Direct > polite.
    • One team, one system.

       

    πŸ”₯ What We Offer

    • Fully remote (Europe).
    • Unlimited vacation + paid sick leave.
    • Quarterly performance bonuses.
    • Medical insurance for you and your partner.
    • Learning budget (courses, conferences, certifications).
    • High trust, high autonomy.
    • No bureaucracy. Real data problems.

       

    πŸ‘‰ Apply if you treat data like production software - and feel uncomfortable when numbers can’t be trusted.

  • 28 views · 4 applications · 3d

    Senior Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · English - B2

    🎯 What You’ll Actually Do

    • Architect and run high-load, production-grade data pipelines where correctness and latency matter.
    • Design systems that survive schema changes, reprocessing, and partial failures.
    • Own data availability, freshness, and trust - not just pipeline success.
    • Make hard calls: accuracy vs cost, speed vs consistency, rebuild vs patch.
    • Build guardrails so downstream consumers (Analysts, Product, Ops) don’t break.
    • Improve observability: monitoring, alerts, data quality checks, SLAs.
    • Partner closely with backend engineers, data analysts, and Product - no handoffs, shared ownership.
    • Debug incidents, own RCA, and make sure the same class of failure doesn’t return.

    This is a hands-on IC role with platform-level responsibility.

     

    🧠 What You Bring

    • 5+ years in data or backend engineering on real production systems.
    • Strong experience with columnar analytical databases (ClickHouse, Snowflake, BigQuery, similar).
    • Experience with event-driven / streaming systems (Kafka, pub/sub, CDC, etc.).
    • Strong SQL + at least one general-purpose language (Python, Java, Scala).
    • You think in failure modes, not happy paths.
    • You explain why something works - and when it shouldn’t be used.

    Bonus: You’ve rebuilt or fixed a data system that failed in production.

     

    πŸ”§ How We Work

    • Reliability > elegance. Correct data beats clever data.
    • Ownership > tickets. You run what you build.
    • Trade-offs > dogma. Context matters.
    • Direct > polite. We fix problems, not dance around them.
    • One team, one system. No silos.

    πŸ”₯ What We Offer

    • Fully remote.
    • Unlimited vacation + paid sick leave.
    • Quarterly performance bonuses.
    • Medical insurance for you and your partner.
    • Learning budget (courses, conferences, certifications).
    • High trust, high autonomy.
    • Zero bureaucracy. Real engineering problems.

       

    πŸ‘‰ Apply if you see data platforms as systems to be engineered - not pipelines to babysit.

  • 36 views · 1 application · 3d

    Senior Data Engineer

    Full Remote · EU · 6 years of experience · English - B2

    OUR COMPANY  

    HBM is a European company building exciting new products from scratch for startups and helping mature companies on their journey towards data-driven innovation and AI-based solutions. Our expertise spans EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more.

    Founded in Ukraine and developed based on Scandinavian culture, HBM is hiring both in Ukraine and the EU for our customers located in Europe and the USA.

      

    Our values include skills, passion, excellence, equality, openness, mutual respect, and trust. 

      

    At HBM, you can become part of a growing company, work with creative colleagues, and enjoy modern technologies while creating AI-based solutions. You’ll be part of a strong corporate culture combined with the agility and flexibility of a start-up, backed by proven outsourcing and development practices, a human-oriented leadership team, an entrepreneurial mindset, and an approach to work-life balance.

      

    PROJECT 

    Our customer is an Icelandic energy company providing electricity, geothermal water, cold water, carbon storage, and an optic network.

    We are looking for a Senior Data Engineer who will be responsible for developing, enhancing, and maintaining the enterprise data warehouse, data platform, and analytical data flows. The role supports all of the company’s subsidiaries and contributes to creating maximum value from data for internal stakeholders.

    The qualified candidate will work as part of the Data Engineering team and will handle complex 3rd-line issues, long-term improvements, and new data development. The work will be aligned with the team’s structured 3-week planning cycles, and tight collaboration with the on-site Team Lead is expected. 

    Tech stack: MS SQL Server, Azure/Databricks, Power BI, Tableau, Microsoft BI stack (SSRS, SSIS, SSAS [OLAP and Tabular]), TimeXtender, exMon.

     

    WE PROVIDE YOU WITH THE FOLLOWING EXCITING CHALLENGES 

    • Develop and maintain the enterprise data warehouse, data marts, staging layers, and transformation logic 
    • Design, implement, and optimize ETL/ELT pipelines (SQL Server, Azure data components, Databricks, etc.) 
    • Build and maintain robust data models (dimensional/star-schema, semantic layers, analytical datasets) 
    • Develop and improve the BI environment and the underlying data processes used by analysts across the company 
    • Implement processes for controlled, reliable data delivery to BI specialists, analysts, and modelling teams (e.g., forecasting, scenario modelling) 
    • Support data quality frameworks and implement testing/validation procedures 
    • Investigate and resolve escalated 3rd-line operational issues and guide 2nd-line support in root cause analysis 
    • Conduct stakeholder workshops to understand business requirements and translate them into technical data solutions 
    • Identify opportunities to improve data usability, analytical value, and process automation 
    • Document data processes, models, pipelines, and architectural decisions in Confluence 
    • Collaborate with the on-site Team Lead during sprint planning, backlog refinement, and prioritization. 

     

      

    WE EXPECT FROM YOU 

    • Degree (Bachelor’s or Master’s) in computer science or a comparable course of study
    • 6+ years of experience working with DWH solutions and data pipelines 
    • Strong SQL development skills, preferably in MS SQL Server 
    • Experience building and maintaining ETL/ELT workflows using: 
    • Databricks 
    • Azure Data Factory or similar cloud-based data orchestration tools 
    • Azure data platform services (e.g., storage, compute, data lake formats) 
    • Solid understanding of data warehouse architectures and dimensional modelling 
    • Experience with data quality checks, validation frameworks, and monitoring 
    • Understanding of BI concepts and ability to prepare user-friendly analytical datasets 
    • Experience collaborating with business stakeholders and capturing analytical or operational data requirements 
    • Strong communication skills and the ability to explain data concepts clearly 
    • Willingness to document solutions and share knowledge within the team 
    • Excellent communication skills, ability to communicate to stakeholders on multiple levels 
    • Action and quality-oriented 
    • Experience working in a distributed, cross-cultural Agile environment
    • English: upper-intermediate / advanced 

     

    WOULD BE A PLUS 

    • Experience with Python or similar languages for data processing 
    • Experience with performance tuning for SQL or data pipelines 
    • Interest in visual clarity, usability of data models, and BI-driven design 

     

     

     WE OFFER YOU 

      

    • Modern technologies, new products development, different business domains. 
    • Start-up agility combined with mature delivery practices and management team. 
    • Strong focus on your technical and personal growth. 
    • Transparent career development and individual development plan. 
    • Flexible working mode (remote/work from office), full remote possibility. 
    • Competitive compensation and social package 
    • Focus on the well-being and human touch. 
    • Flat organization where everyone is heard and is invited to contribute. 
    • Work-life balance approach to work. 
    • Passion and Fun in everything we do. 
  • 45 views · 4 applications · 3d

    Data Engineer (Support)

    Full Remote · EU · 3 years of experience · English - B2

    OUR COMPANY  

    HBM is a European company building exciting new products from scratch for startups and helping mature companies on their journey towards data-driven innovation and AI-based solutions. Our expertise spans EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more.

    Founded in Ukraine and developed based on Scandinavian culture, HBM is hiring both in Ukraine and the EU for our customers located in Europe and the USA.

      

    Our values include skills, passion, excellence, equality, openness, mutual respect, and trust. 

      

    At HBM, you can become part of a growing company, work with creative colleagues, and enjoy modern technologies while creating AI-based solutions. You’ll be part of a strong corporate culture combined with the agility and flexibility of a start-up, backed by proven outsourcing and development practices, a human-oriented leadership team, an entrepreneurial mindset, and an approach to work-life balance.

      

    PROJECT 

    Our customer is an Icelandic energy company providing electricity, geothermal water, cold water, carbon storage, and an optic network.

    We are looking for a Data Engineer with strong technical troubleshooting skills to be responsible for monitoring, investigating, and resolving operational issues related to the data warehouse and data pipelines. The qualified candidate will work as part of the Data Engineering team and will handle incoming 2nd-line support tickets (primarily task failures, timeouts, execution errors, and data inconsistencies in scheduled processes).

    The role ensures that daily operational data flows run reliably and that incidents are triaged and resolved efficiently. 

    Tech stack: MS SQL Server, Azure/Databricks, Power BI, Tableau, Microsoft BI stack (SSRS, SSIS, SSAS [OLAP and Tabular]), TimeXtender, exMon.

     

    WE PROVIDE YOU WITH THE FOLLOWING CHALLENGES 

    • Troubleshooting failed scheduled tasks (e.g., ETL pipelines that time out, fail on specific datasets, or produce partial/incomplete outputs) 
    • Investigating recurring timeout issues in ETL jobs (e.g., exMon timeout while running data extraction from in-house systems) 
    • Resolving warnings raised by the monitoring system (exMon) 
    • Identifying and escalating data quality inconsistencies (e.g., discrepancies in RG41 SCADA data, mismatches in business-critical tables) 
    • Running or re-running failed jobs when appropriate 
    • Correcting configuration issues in pipeline parameters, schedule triggers, or source/target connections 
    • Cooperating closely with the on-site team (status meetings, sprint planning, etc.)
    • Collaborating closely with the Data Engineering Team Lead for priorities and escalations
    • Updating Jira tickets in English with clear problem descriptions and resolutions
    • Gradually taking on more data engineering tasks (besides support)

      

    WE EXPECT FROM YOU 

    • Degree (Bachelor’s or Master’s) in computer science or a comparable course of study
    • 3+ years of experience working with DWH solutions and data pipelines 
    • Strong SQL debugging skills (preferably MS SQL Server) 
    • Experience with ETL / ELT workflows (SSIS, ADF, custom pipelines, or similar) 
    • Familiarity with data warehouse concepts (fact tables, dimensions, staging layers) 
    • Ability to parse log outputs, identify root causes, and correct configuration or code-level issues in data jobs (see the sketch after this list)
    • Experience with job scheduling/monitoring systems (e.g., exMon or equivalents) 
    • Excellent communication skills, ability to communicate to stakeholders on multiple levels 
    • Action and quality-oriented 
    • Experience working in a distributed, cross-cultural Agile environment
    • English: upper-intermediate / advanced 
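    For the log-parsing requirement referenced in the list above, a small hedged sketch; the log-line format is invented, since exMon's actual output is not documented here:

```python
# Hypothetical sketch: count FAILED/TIMEOUT occurrences per job in a log
# so recurring offenders surface first. The line format is an assumption.
import re
from collections import Counter

LINE = re.compile(
    r"^(?P<ts>\S+ \S+)\s+(?P<job>\S+)\s+(?P<status>FAILED|TIMEOUT)\b"
)

def summarize(lines):
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[(m["job"], m["status"])] += 1
    return counts.most_common()

# e.g. summarize(open("scheduler.log", encoding="utf-8"))
```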

     

    WOULD BE A PLUS 

    • Experience with Python or similar languages for data processing 

     

      WE OFFER YOU 

      

    • Modern technologies, new products development, different business domains. 
    • Start-up agility combined with mature delivery practices and management team. 
    • Strong focus on your technical and personal growth. 
    • Transparent career development and individual development plan. 
    • Flexible working mode (remote/work from office), full remote possibility. 
    • Competitive compensation and social package 
    • Focus on the well-being and human touch. 
    • Flat organization where everyone is heard and is invited to contribute. 
    • Work-life balance approach to work. 
    • Passion and Fun in everything we do. 
  • 51 views · 4 applications · 3d

    Senior Data Engineer (for Ukrainians in EU)

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B2

    About our Customer
    It's a European company turning bold ideas into reality. We build innovative products for startups and guide established companies on their journey to data-driven innovation and AI-powered solutions. Our expertise spans EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more.
     

    Founded in Ukraine with a Scandinavian-inspired culture.
     

    We value skills, passion, excellence, equality, openness, mutual respect, and trust. You’ll join a growing company, work with creative, inspiring colleagues, explore cutting-edge technologies, and build AI-driven solutions that make a real impact.
     

    Project
    Our client is an Icelandic energy company  providing electricity, geothermal water, cold water, carbon storage, and optic networks.
     

    We are looking for a Senior Data Engineer ready to dive deep into data, solve challenging problems, and create maximum value for internal stakeholders. You’ll handle complex issues, design long-term improvements, and develop new data pipelines as part of an enthusiastic and collaborative Data Engineering team.
     

    Tech Stack:
    πŸ—„οΈ MS SQL Server | Azure/Databricks | Power BI, Tableau | Microsoft BI stack (SSRS, SSIS, SSAS) | TimeXtender | exMon
     

    Responsibilities:

    • Develop & maintain enterprise data warehouse, data marts, staging layers, and transformation logic
    • Design, implement & optimize ETL/ELT pipelines (SQL Server, Azure, Databricks)
    • Build & maintain robust data models (dimensional/star-schema, semantic layers, analytical datasets)
    • Improve BI environment and ensure data is reliable and actionable
    • Implement controlled data delivery processes to analysts & BI specialists
    • Support data quality frameworks, testing & validation procedures
    • Investigate 3rd-line operational issues & guide 2nd-line support
    • Run stakeholder workshops to translate business needs into elegant technical solutions
    • Identify opportunities to improve data usability, value, and automation
    • Document all processes, models, and pipelines in Confluence
    • Collaborate with on-site Team Lead for sprint planning, backlog refinement, and prioritization
       

    Requirements

    • Bachelor’s or Master’s in Computer Science or related field
    • 6+ years of experience with DWH solutions & data pipelines
    • Strong SQL development skills (MS SQL Server preferred)
    • ETL/ELT workflow experience using:
      • Databricks
      • Azure Data Factory / cloud orchestration tools
      • Azure data platform services (storage, compute, data lake)
    • Solid understanding of data warehouse architectures & dimensional modeling
    • Experience with data quality checks, validation, and monitoring
    • Understanding of BI concepts & ability to prepare user-friendly datasets
    • Strong communication, able to explain data concepts to stakeholders
    • Willingness to document solutions and share knowledge
    • Experience in distributed, cross-cultural Agile environments
    • English: upper-intermediate / advanced


    πŸ”Ή Bonus / Nice to Have

    • Python or similar for data processing
    • Performance tuning for SQL or data pipelines
    • Interest in visual clarity & usability of data models
  • 31 views · 1 application · 2d

    Data Engineer to $4300

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B2

    CrunchCode is an international IT services company with about 7 years of experience building web services and web applications. We work in staff augmentation (outstaff) and outsourcing formats, placing specialists on client projects under a long-term cooperation model.

    We work primarily with projects in logistics (including last mile), e-commerce, fintech, and banking, as well as enterprise solutions.
    It is important to us that a project is "clean" and transparent in terms of ethics and value for users.

    As a matter of principle, we do not take on projects related to:
    ● gambling,
    ● adult content and pornography,
    ● fraud or any development aimed at deception or manipulation.

    What We Offer:
    ● Fully remote work
    ● Long-term, stable project
    ● High level of autonomy and trust
    ● Minimal bureaucracy
    ● Direct impact on business-critical logistics systems
    ● Long-term engagement, not a short-term contract.

    Project Overview:
    The project is a cloud-based analytics platform designed for commercial real estate. It provides tools for data analysis, portfolio management, financial insights, and lease tracking, helping owners, property managers, and brokers make informed, data-driven decisions.

    Requirements (Must-have):
    - Strong English communication skills (B2)

    - Power BI skills:
    • Able to understand the data sources and relevant data for analysis
    • Design and refine data models, familiarity with a dimensional model
    • Develop interactive reports and dashboards
    • Knowledge of DAX

    - Azure and DB skills:
    • Proficiency in ETL/ELT design, development and support
    • Strong hands-on experience in Azure Data Factory
    • Experience with Azure Functions (see the sketch after this list)
    • Stored procedure writing and optimization
    • Telerik .NET Reporting experience (nice to have)
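    For the Azure Functions item referenced in the list above, a minimal hedged sketch of an HTTP-triggered function in Python (v1 programming model); what such a function would actually do on this project is not specified, so the payload is invented:

```python
# Hypothetical sketch: an HTTP-triggered Azure Function (Python v1 model).
import json

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    entity = req.params.get("entity", "leases")  # hypothetical query parameter
    return func.HttpResponse(
        json.dumps({"entity": entity, "status": "refreshed"}),
        mimetype="application/json",
        status_code=200,
    )
```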

    Responsibilities:
    Continue improving existing data reporting tools. List of existing integrations (where data comes from):
    - Procore
    - DealPath
    - Yardi
    - MRI
    - JDE
    - VTS
    - OneSite
    - CoStar
    - Argus
    - Salesforce
    - RealPage

    Nice to Have:
    - Basic Python skills

    Required: 
    The client is in the PST time zone and is available for communication from 5 to 6 pm UA time. The specialist should be available until 7 pm UA time.

    Hiring Process:
    - Intro call
    - Technical discussion (focused on real experience)
    - Offer
    Start: ASAP

  • 4 views · 0 applications · 2d

    Infrastructure Engineer with Java (hybrid work in Warsaw)

    Office Work · Poland · 5 years of experience · English - B2

    The product we are working on is one of the TOP-3 navigation systems, along with complex web services and other related solutions. The web and mobile apps handle information at a massive scale and extend well beyond search, giving people and companies a lot of new, useful options.

    This role focuses on executing critical migration projects within the product's backend infrastructure. The Backend Infrastructure team is undertaking several large-scale migrations to modernize its systems, improve reliability, and reduce maintenance overhead. This TVC position will be instrumental in performing the hands-on work required for these migrations, working closely with the infrastructure team and other backend teams.
     

    Responsibilities:
     

    • Execute Migrations: Actively participate in and drive the execution of large-scale code and system migrations across various backend services. Some examples include:
      • Migrating event processing systems from custom infrastructure to managed infrastructure solutions;
      • Transitioning services from custom OpenCensus metrics collection to OpenTelemetry (see the sketch after this list);
      • Migrating custom metrics to standard OpenTelemetry metrics.
    • Code Modification and Updates: Update and refactor existing codebases (primarily Java) to align with new libraries, platforms, and infrastructure.
    • Testing: Work with the Infrastructure team to create a testing plan for migrations to ensure that changes do not break running services and execute the test plans.
    • Collaboration: Work closely with the Backend Infrastructure team and other software engineers to understand migration requirements, plan execution strategies, and ensure smooth transitions with minimal disruption.
    • Problem Solving: Investigate, debug, and resolve technical issues and complexities encountered during the migration processes.
    • Documentation: Maintain clear and concise documentation for migration plans, processes, changes made, and outcomes.
    • Best Practices: Adhere to software development best practices, ensuring code quality, and follow established guidelines for infrastructure changes.
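    On the OpenCensus-to-OpenTelemetry migration referenced in the list above, a hedged sketch of the OpenTelemetry metrics API, shown in Python for brevity (the role itself is Java, where the API is analogous); the meter and counter names are placeholders:

```python
# Hedged sketch: recording a counter through the OpenTelemetry metrics API,
# the kind of call that replaces a custom or OpenCensus metric after migration.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("backend.migration.example")   # placeholder name
requests_counter = meter.create_counter("requests_processed")
requests_counter.add(1, attributes={"service": "event-processor"})
```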

       

    Requirements:

    • 5+ years of hands-on experience in backend software development.
    • Strong proficiency in Java programming.
    • Strong communication and interpersonal skills, with the ability to collaborate effectively within a technical team environment.
    • Bachelor’s degree in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.
    • Good spoken and written English level β€” Upper-Intermediate or higher.
       

    Nice to have:

    • Experience with observability frameworks such as OpenTelemetry or OpenCensus.
    • Familiarity with gRPC.
    • Knowledge of Google Cloud Platform (GCP) services, particularly data processing services like Dataflow.
       

    We offer:

    • Opportunities to develop in various areas;
    • Compensation package (20 paid vacation days, paid sick leaves);
    • Flexible working hours;
    • Medical insurance;
    • English courses with a native speaker, yoga (Zoom);
    • Paid tech training and other activities for professional growth;
    • Hybrid work mode (∼3 days in the office);
    • International business trips
    • Comfortable office.

       

    If your qualifications and experience match the requirements of the position, our recruitment team will reach out to you within a week at most. Please rest assured that we carefully consider each candidate, but due to the volume of applications, the review and further processing of your candidacy may take some time.
