Jobs

  • · 104 views · 2 applications · 18d

    Middle/Senior Data Engineer (3445)

    Full Remote · Ukraine · 3 years of experience · Intermediate

    General information:
    We’re ultimately looking for someone who understands data flows well, has strong analytical thinking, and can grasp the bigger picture. If you’re the kind of person who asks the right questions and brings smart ideas to the table, some specific requirements can be flexible — we’re more interested in finding "our person" :)
     

    Responsibilities:
    Implementation of business logic in the Data Warehouse according to the specifications
    Some business analysis to ensure the relevant data is provided in a suitable form
    Conversion of business requirements into data models
    Pipeline management (ETL pipelines in Azure Data Factory)
    Load and query performance tuning
    Working with senior staff on the customer's side who provide the requirements, while the engineer may propose ideas of their own
     

    Requirements:
    Experience with Azure and readiness to spend up to 80% of the time working with SQL is a must
    Development of database systems (MS SQL/T-SQL, SQL)
    Writing well-performing SQL code and investigating and implementing performance improvements
    Data warehousing / dimensional modeling
    Working within an Agile project setup
    Creation and maintenance of Azure DevOps & Data Factory pipelines (a minimal sketch follows this section)
    Developing robust data pipelines with DBT

    Experience with Databricks (optional)
    Work in Supply Chain & Logistics and awareness of the SAP MM data structure (optional).
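    For illustration, a minimal sketch of triggering and monitoring a Data Factory pipeline run from Python with the Azure SDK; the subscription, resource group, factory, pipeline name, and parameters are placeholder assumptions.

```python
# Hypothetical sketch: trigger and monitor an Azure Data Factory pipeline run.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-dwh"               # placeholder
FACTORY_NAME = "adf-dwh"                # placeholder
PIPELINE_NAME = "pl_load_sales"         # placeholder

adf = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the pipeline with a runtime parameter.
run = adf.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-01-31"},
)

# Poll until the run reaches a terminal state.
while True:
    status = adf.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")
```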

     

  • · 45 views · 5 applications · 18d

    Senior Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 8 years of experience · Upper-Intermediate

    Position Summary: 

    We are looking for a talented Senior Data Platform Engineer to join our Blockchain team, to participate in the development of the data collection and processing framework to integrate new chains. This is a remote role and we are flexible with considering applications from anywhere in Europe. 
     
    Duties and responsibilities: 

    • Integration of blockchains, Automated Market Maker (AMM) protocols, and bridges within Crystal's platform; 
    • Active participation in development and maintenance of our data pipelines and backend services; 
    • Integrate new technologies into our processes and tools; 
    • End-to-end feature design and implementation; 
    • Code, debug, test and deliver features and improvements in a continuous manner; 
    • Provide code review, assistance and feedback for other team members. 


    Required: 

    • 8+ years of experience developing Python backend services and APIs; 
    • Advanced knowledge of SQL - ability to write, understand and debug complex queries; 
    • Basic data warehousing and database architecture principles; 
    • POSIX/Unix/Linux ecosystem knowledge; 
    • Strong knowledge of and experience with Python and API frameworks such as Flask or FastAPI (a minimal sketch follows this list); 
    • Knowledge about blockchain technologies or willingness to learn; 
    • Experience with PostgreSQL database system; 
    • Knowledge of Unit Testing principles; 
    • Experience with Docker containers and proven ability to migrate existing services; 
    • Independent and autonomous way of working; 
    • Team-oriented work and good communication skills are an asset. 
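    As a rough illustration of the kind of backend service described here, a minimal FastAPI endpoint backed by PostgreSQL; the table schema, route, and connection string are assumptions.

```python
# Minimal sketch of a FastAPI endpoint reading from PostgreSQL (placeholder schema).
import os

import psycopg2
from fastapi import FastAPI, HTTPException

app = FastAPI()


def get_connection():
    # DSN taken from the environment, e.g. postgresql://user:pass@host:5432/analytics
    return psycopg2.connect(os.environ["DATABASE_URL"])


@app.get("/transactions/{address}")
def list_transactions(address: str, limit: int = 50):
    """Return the latest transactions for a blockchain address (placeholder table)."""
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT tx_hash, amount, block_time "
            "FROM transactions WHERE address = %s "
            "ORDER BY block_time DESC LIMIT %s",
            (address, limit),
        )
        rows = cur.fetchall()
    if not rows:
        raise HTTPException(status_code=404, detail="No transactions found")
    return [
        {"tx_hash": h, "amount": float(a), "block_time": t.isoformat()}
        for h, a, t in rows
    ]
```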


    Would be a plus: 

    • Practical experience in big data and frameworks – Kafka, Spark, Flink, Data Lakes and Analytical Databases such as ClickHouse; 
    • Knowledge of Kubernetes and Infrastructure as Code – Terraform and Ansible; 
    • Passion for Bitcoin and Blockchain technologies; 
    • Experience with distributed systems; 
    • Experience with opensource solutions; 
    • Experience with Java or willingness to learn. 
  • · 39 views · 0 applications · 17d

    Middle BI/DB Developer

    Office Work · Ukraine (Lviv) · Product · 2 years of experience · Upper-Intermediate

    About us:

    EveryMatrix is a leading B2B SaaS provider delivering iGaming software, content and services. We provide casino, sports betting, platform and payments, and affiliate management to 200 customers worldwide.

    But that's not all! We're not just about numbers, we're about people. With a team of over 1000 passionate individuals spread across twelve countries in Europe, Asia, and the US, we're all united by our love for innovation and teamwork.

    EveryMatrix is a member of the World Lottery Association (WLA) and European Lotteries Association. In September 2023 it became the first iGaming supplier to receive WLA Safer Gambling Certification. EveryMatrix is proud of its commitment to safer gambling and player protection whilst producing market leading gaming solutions.

    Join us on this exciting journey as we continue to redefine the iGaming landscape, one groundbreaking solution at a time.
     

    We are looking for a passionate and dedicated Middle BI/DB Developer to join our team in Lviv!

    About the unit:

    DataMatrix is the part of the EveryMatrix platform responsible for collecting, storing, processing and utilizing hundreds of millions of transactions from the whole platform every single day. We develop Business Intelligence solutions, reports, 3rd party integrations, data streaming and other products for both external and internal use. The team consists of 35 people and is located in Lviv.

    What You'll get to do:

    • Develop real-time data processing and aggregations
    • Create and modify data marts (enhance our data warehouse)
    • Take care of internal and external integrations
    • Forge various types of reports

    Our main stack:

    • DB: BigQuery, PostgreSQL
    • ETL: Apache Airflow, Apache NiFi (a minimal DAG sketch follows this list)
    • Streaming: Apache Kafka
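    To illustrate how this stack fits together, a minimal Airflow DAG (Google provider) that rebuilds a daily aggregate table in BigQuery; the dataset, table, and schedule are placeholder assumptions.

```python
# Hypothetical daily DAG that rebuilds an aggregate table in BigQuery.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

AGGREGATE_SQL = """
CREATE OR REPLACE TABLE analytics.daily_bets AS
SELECT DATE(placed_at) AS bet_date, COUNT(*) AS bets, SUM(amount) AS turnover
FROM raw.bets
GROUP BY bet_date
"""

with DAG(
    dag_id="daily_bets_aggregate",
    start_date=datetime(2024, 1, 1),
    schedule="0 3 * * *",   # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    rebuild_daily_bets = BigQueryInsertJobOperator(
        task_id="rebuild_daily_bets",
        configuration={"query": {"query": AGGREGATE_SQL, "useLegacySql": False}},
    )
```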

    What You need to know:

    Here's what we offer:

    • Start with 22 days of annual leave, with 2 additional days added each year, up to 32 days by your fifth year with us.
    • Stay Healthy: 10 sick leave days per year, no doctor's note required; 30 medical leave days with medical allowance
    • Support for New Parents:
    • 21 weeks of paid maternity leave, with the flexibility to work from home full-time until your child turns 1 year old.
    • 4 weeks of paternity leave, plus the flexibility to work from home full-time until your child is 13 weeks old.

    Our office perks include on-site massages and frequent team-building activities in various locations.

    Benefits & Perks:

    • Daily catered lunch or monthly lunch allowance. 
    • Private Medical Subscription. 
    • Access online learning platforms like Udemy for Business, LinkedIn Learning or O’Reilly, and a budget for external training.
    • Gym allowance

    At EveryMatrix, we're committed to creating a supportive and inclusive workplace where you can thrive both personally and professionally. Come join us and experience the difference!

  • · 46 views · 4 applications · 17d

    Data Engineer (PostgreSQL, Snowflake, Google BigQuery, MongoDB, Elasticsearch)

    Full Remote · Worldwide · 5 years of experience · Intermediate

    We are looking for a Data Engineer with a diverse background in data integration to join the Data Management team. Some of our data is small, some is very large (1 trillion+ rows); some is structured, some is not. Our data comes in all kinds of sizes, shapes and formats: traditional RDBMSs like PostgreSQL, Oracle and SQL Server; MPPs like StarRocks, Vertica, Snowflake and Google BigQuery; and unstructured, key-value stores like MongoDB and Elasticsearch, to name a few.

     

    We are looking for individuals who can design and solve data problems using the different types of databases and technologies supported within our team. We use MPP databases to analyze billions of rows in seconds. We use Spark and Iceberg, in batch or streaming mode, to process whatever the data needs are. We also use Trino to connect all the different types of data without moving them around.
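    As a rough illustration of that Spark-and-Iceberg style of processing, a minimal PySpark batch job; the catalog, table, and column names are assumptions.

```python
# Hypothetical daily rollup over an Iceberg table (catalog "lake" assumed configured).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_rollup").getOrCreate()

# Read a (potentially trillion-row) Iceberg table and build a daily aggregate.
events = spark.table("lake.web.events")

daily = (
    events
    .where(F.col("event_date") == "2024-01-31")   # process one partition per run
    .groupBy("event_date", "site_id")
    .agg(F.count("*").alias("events"), F.countDistinct("user_id").alias("users"))
)

# Append the result to another Iceberg table.
daily.writeTo("lake.analytics.daily_site_stats").append()
```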

     

    Besides a competitive compensation package, you’ll be working with a great group of technologists interested in finding the right database and the right technology for the job, in a culture that encourages innovation. If you’re ready to step up and take on some new technical challenges at a well-respected company, this is a unique opportunity for you.

     

    Responsibilities:

    • Work within our on-prem Hadoop ecosystem to develop and maintain ETL jobs
    • Design and develop data projects against RDBMS such as PostgreSQL 
    • Implement ETL/ELT processes using various tools (Pentaho) or programming languages (Java, Python) at our disposal 
    • Analyze business requirements, design and implement required data models
    • Lead data architecture and engineering decision making/planning.
    • Translate complex technical subjects into terms that can be understood by both technical and non-technical audiences.

     

    Qualifications: (must have)

    • BA/BS in Computer Science or in related field
    • 5+ years of experience with RDBMS databases such as Oracle, MSSQL or PostgreSQL
    • 2+ years of experience managing or developing in the Hadoop ecosystem
    • Programming background with either Python, Scala, Java or C/C++
    • Experience with Spark: PySpark, SparkSQL, Spark Streaming, etc.
    • Strong in any of the Linux distributions: RHEL, CentOS or Fedora
    • Working knowledge of orchestration tools such as Oozie and Airflow
    • Experience working in both OLAP and OLTP environments
    • Experience working on-prem, not just cloud environments
    • Experience working with teams outside of IT (e.g. Application Developers, Business Intelligence, Finance, Marketing, Sales)

     

    Desired: (nice to have)

    • Experience with Pentaho Data Integration or any ETL tools such as Talend, Informatica, DataStage or HOP.
    • Deep knowledge of shell scripting, scheduling, and monitoring processes on Linux
    • Experience using reporting and Data Visualization platforms (Tableau, Pentaho BI)
    • Working knowledge of data unification and setup using Presto/Trino
    • Web analytics or Business Intelligence experience is a plus
    • Understanding of Ad stack and data (Ad Servers, DSM, Programmatic, DMP, etc)
  • · 62 views · 6 applications · 16d

    Associate Director, Analytics - Data Engineering to $4000

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · Advanced/Fluent

     

    We are looking for someone who is experienced and familiar with the following tools:

    Strong programming skills in SQL, Python, R, or other programming languages

    Experience with SQL and NoSQL databases

    Knowledge of ETL tools such as Google BigQuery, Funnel.io, and Tableau Prep (a minimal BigQuery example follows this list)

    Business Intelligence (Tableau, Looker, Google Looker Studio, Power BI, Datorama, etc.)

    Analytics platforms UIs / APIs (Google Analytics, Adobe, etc.)

    Media reporting UIs / APIs (DV360, SA360, Meta, etc.)

    Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform is a plus
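    For illustration, a minimal sketch of pulling media delivery metrics from Google BigQuery into pandas for reporting; the project, dataset, and column names are assumptions.

```python
# Hypothetical sketch: query campaign metrics from BigQuery into a pandas DataFrame.
from google.cloud import bigquery

client = bigquery.Client(project="marketing-analytics")   # placeholder project

SQL = """
SELECT campaign_id, DATE(event_ts) AS day,
       SUM(impressions) AS impressions, SUM(clicks) AS clicks, SUM(spend) AS spend
FROM `marketing-analytics.media.delivery`
WHERE DATE(event_ts) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND CURRENT_DATE()
GROUP BY campaign_id, day
"""

df = client.query(SQL).result().to_dataframe()
df["ctr"] = df["clicks"] / df["impressions"]
print(df.sort_values("spend", ascending=False).head(10))
```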
    What we’re looking for

    A Bachelor’s or Master’s degree in computer science, operations research, statistics, applied mathematics, management information systems, or a related quantitative field, or equivalent work experience in areas such as economics, engineering or physics

    3+ years of experience with any combination of analytics, marketing analytics, and analytic techniques for marketing, customer, and business applications

    Hands-on experience using SQL, Google BigQuery, Python, R, and Google Cloud (GCP) related products

    Hands-on experience working with common ETL tools

    Hands-on experience using Tableau, Datorama, and other common visualisation tools

    Expertise across programmatic display, video, native, and ad serving technology, as well as digital advertising reporting, measurement, and attribution tools

    Adept in agile methodologies and well-versed in applying DataOps methods to the construction of pipelines and delivery

    Demonstrated ability to effectively operate both independently and in a team environment

    Responsibilities

    Design, develop and maintain complex data pipelines and systems to process large volumes of data

    Collaborate with cross-functional teams to gather requirements, identify data sources, and ensure data quality

    Architect data solutions that are scalable, reliable, and secure, and meet business requirements

    Develop and maintain data models, ETL processes, and data integration strategies

    Design and implement data governance policies and procedures to ensure data accuracy, consistency, and security

    Create visualisations and reports to communicate insights to stakeholders across multiple data streams

    Collaborate as part of a team to drive analyses and insights that lead to more informed decisions and improved business performance

    Work with other teams to ensure smooth collaboration and timely delivery of client projects
     

  • · 158 views · 8 applications · 16d

    Data Engineer, Data Analyst, Data Scientist

    Full Remote · Ukraine · Product · 1 year of experience · Upper-Intermediate

    Requirements:

    • Practical experience with data processing and data analysis
    • Strong knowledge of Python
    • Ability to take part in business analysis is preferable
    • Organized, responsible, and fast-learning person

     

    Responsibilities:

    • Design and implement data automation solutions
    • Optimize data processing for efficiency and scalability
    • Work closely with cross-functional teams to understand data requirements and support business analysis
    • Prompt engineering and business analysis
    • Data classification and development of a recommendation system (a minimal sketch follows this list)
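    As a rough sketch of the classification and recommendation work mentioned above, a tiny content-based example with scikit-learn over placeholder data.

```python
# Hypothetical content-based recommendation: TF-IDF over item descriptions + cosine similarity.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = pd.DataFrame({
    "item_id": [1, 2, 3],
    "description": [
        "wireless noise cancelling headphones",
        "bluetooth over-ear headphones",
        "stainless steel water bottle",
    ],
})

tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(items["description"])
similarity = cosine_similarity(matrix)

# Recommend the most similar item to item 1 (excluding itself).
scores = similarity[0].copy()
scores[0] = -1
print("Recommended item_id:", items.loc[scores.argmax(), "item_id"])
```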

       

    We Offer

    • WFH and flexible working hours
  • · 87 views · 3 applications · 16d

    Data Engineer

    Part-time · Full Remote · Ukraine · 3 years of experience · Upper-Intermediate

    Reenbit is a people‑first software consultancy headquartered in Lviv. We help global clients accelerate delivery with small, senior‑only teams and a culture of trust, transparency, and continuous learning. As we expand our Google Cloud™ data practice, we’re looking for a part‑time engineer who can jump into short engagements, deliver production‑ready solutions, and shape our internal expertise.
     

    Why this role?

    • Flexibility: 10‑20 hours per week on a schedule you set
    • Impact: Own discrete data pipelines and POCs that go live within weeks
    • Influence: Help craft Reenbit’s Google‑centric data standards, templates, and knowledge base
    • Growth: Access to funded certifications, internal tech talks, and mentorship opportunities
       

    What you’ll do

    • Design and implement data ingestion, transformation, and storage solutions on Google Cloud Platform (GCP)—primarily BigQuery, Cloud Storage, Dataflow / Apache Beam, and Pub/Sub (a minimal Beam sketch follows this list).
    • Build repeatable CI/CD workflows (Cloud Build, Terraform, GitHub Actions) for data projects.
    • Optimize query performance and cost; guide teammates on partitioning, clustering, and access‑control best practices.
    • Collaborate with solution architects, BI developers, and DevOps to turn business questions into reliable datasets and dashboards.
    • Contribute to internal accelerators (Terraform modules, reference architectures, code snippets) that raise our delivery velocity.
    • Join discovery calls and estimations to advise on effort, risks, and alternative approaches for potential projects.
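    For illustration, a minimal Dataflow/Apache Beam sketch that streams JSON events from Pub/Sub into BigQuery; the project, subscription, table, and schema are placeholder assumptions.

```python
# Hypothetical streaming pipeline: Pub/Sub -> parse JSON -> BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/reenbit-demo/subscriptions/events-sub")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "reenbit-demo:analytics.events",
            schema="event_id:STRING,user_id:STRING,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```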
       

    What you bring

    • At least 2 years of commercial, client‑facing experience building data solutions on GCP.
    • Solid grasp of SQL, ELT/ETL patterns, streaming vs. batch processing, and data modeling.
    • Hands‑on work with BigQuery plus one or more of Cloud Composer (Airflow), Workflows, Dataflow/Beam, or Dataproc/Spark.
    • Proficiency in Python (or Java/Scala) for pipeline development and automation, plus Looker.
    • A DevOps mindset: infrastructure as code (Terraform or Pulumi), Git branching flows, automated testing, and CI/CD best practices.
    • Clear written and spoken English, proactive communication, and the ability to work independently within GMT +1 to +3 time zones.
       

    Nice to have: Google Professional Data Engineer or Cloud Architect certification; experience migrating workloads from AWS/Azure to GCP or running AI workloads on top of BigQuery; familiarity with Power BI, Vertex AI Feature Store, or Dataplex; contributions to open‑source Beam, Airflow, or dbt projects.
     

    We offer:

    • Annual paid leave — 18 working days (100% compensation).
    • 3 additional vacation days for childbirth or marriage.
    • Annual paid sick leave — 7 working days (100% compensation).
    • Opportunity to become a part of our professional team.
    • Opportunity to participate in various events (educational programs, seminars, training sessions).
    • English courses.
    • Healthcare program.
    • IT Cluster membership.
    • Free parking place.
    • Competitive compensation.
  • · 81 views · 21 applications · 16d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Boosty Labs is one of the most prominent outsourcing companies in the blockchain domain. Among our clients are such well-known companies as Ledger, Consensys, Storj, Animoca brands, Walletconnect, Coinspaid, Paraswap, and others.

    About the project: Advanced blockchain analytics and on-the-ground intelligence to empower financial institutions, governments and regulators in the fight against cryptocurrency crime.

    • Requirements:
      • 6+ years of experience with Python backend development
      • Solid knowledge of SQL (including writing/debugging complex queries)
      • Understanding of data warehouse principles and backend architecture
      • Experience working in Linux/Unix environments
      • Experience with APIs and Python frameworks (e.g., Flask, FastAPI)
      • Experience with PostgreSQL
      • Familiarity with Docker
      • Basic understanding of unit testing
      • Good communication skills and ability to work in a team
      • Interest in blockchain technology or willingness to learn
      • Experience with CI/CD processes and containerization (Docker, Kubernetes) is a plus
      • Strong problem-solving skills and the ability to work independently
    • Responsibilities:
      • Integrate new blockchains, AMM protocols, and bridges into our platform
      • Build and maintain data pipelines and backend services
      • Help implement new tools and technologies into the system
      • Participate in the full cycle of feature development – from design to release
      • Write clean and testable code
      • Collaborate with the team through code reviews and brainstorming
    • Nice to Have:
      • Experience with Kafka, Spark, or ClickHouse (a minimal Kafka sketch follows this list)
      • Knowledge of Kubernetes, Terraform, or Ansible
      • Interest in crypto, DeFi, or distributed systems
      • Experience with open-source tools
      • Some experience with Java or readiness to explore it
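      As a rough illustration of the pipeline work above, a minimal sketch of a Kafka consumer that upserts on-chain transfer events into PostgreSQL; the topic, DSN, and table are assumptions.

```python
# Hypothetical consumer: Kafka transfer events -> PostgreSQL upsert.
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "chain.transfers",                         # placeholder topic
    bootstrap_servers="localhost:9092",
    group_id="transfers-loader",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

conn = psycopg2.connect("postgresql://analytics@localhost/chain")  # placeholder DSN

for message in consumer:
    event = message.value
    with conn, conn.cursor() as cur:            # commits each batch of one message
        cur.execute(
            "INSERT INTO transfers (tx_hash, sender, receiver, amount) "
            "VALUES (%s, %s, %s, %s) ON CONFLICT (tx_hash) DO NOTHING",
            (event["tx_hash"], event["from"], event["to"], event["amount"]),
        )
```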
    • What we offer:
      • Remote working format 
      • Flexible working hours
      • Informal and friendly atmosphere
      • The ability to focus on your work: a lack of bureaucracy and micromanagement
      • 20 paid vacation days
      • 7 paid sick leaves
      • Education reimbursement
      • Free English classes
      • Psychologist consultations
    • Recruitment process:

      Recruitment Interview – Technical Interview

  • · 31 views · 6 applications · 15d

    Lead Data Engineer (AWS)

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Pre-Intermediate

    We are looking for an experienced Lead Data Engineer to manage a team responsible for building and maintaining data pipelines using AWS and Pentaho Data Integration (PDI). The role involves designing, implementing, and optimizing ETL processes in a distributed cloud environment.

     

    Key responsibilities:

    • Lead a team of data engineers: task planning, deadline tracking, code review
    • Design and evolve data flow architecture within the AWS ecosystem (S3, Glue, Redshift, Lambda, etc.; a minimal Glue example follows this list)
    • Develop and support ETL processes using Pentaho Data Integration
    • Set up CI/CD pipelines for data workflows
    • Optimize performance, monitor and debug ETL processes
    • Collaborate closely with analytics, data science, and DevOps teams
    • Implement best practices for Data Governance and Data Quality
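    For illustration, a minimal boto3 sketch of starting an AWS Glue job and waiting for it to finish; the job name, region, and arguments are placeholder assumptions.

```python
# Hypothetical sketch: start a Glue job run and poll until it reaches a terminal state.
import time

import boto3

glue = boto3.client("glue", region_name="eu-central-1")

JOB_NAME = "load_sales_to_redshift"   # placeholder Glue job

run_id = glue.start_job_run(
    JobName=JOB_NAME,
    Arguments={"--load_date": "2024-01-31"},
)["JobRunId"]

while True:
    state = glue.get_job_run(JobName=JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print("Glue job finished:", state)
```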

     

    Requirements:

    • 5+ years of experience in data engineering
    • Strong hands-on knowledge of AWS services: S3, Glue, Redshift, Lambda, CloudWatch, IAM
    • Proven experience with Pentaho Data Integration (Kettle): building complex transformations and jobs
    • Proficiency in SQL (preferably Redshift or PostgreSQL)
    • Experience with Git and CI/CD tools (e.g., Jenkins, GitLab CI)
    • Understanding of DWH, Data Lake, ETL/ELT concepts
    • Solid Python skills
    • Team leadership and project management experience

     

    Nice to have:

    • Experience with other ETL tools (e.g., Apache NiFi, Talend, Airflow)
    • Experience migrating on-premises solutions to the cloud (especially AWS)
  • · 25 views · 3 applications · 15d

    Senior Data Engineer (IRC264689)

    Full Remote · Poland, Romania, Croatia, Slovakia · 5 years of experience · Upper-Intermediate

    Our client provides collaborative payment, invoice and document automation solutions to corporations, financial institutions and banks around the world. The company’s solutions are used to streamline, automate and manage processes involving payments, invoicing, global cash management, supply chain finance and transactional documents. Organizations trust these solutions to meet their needs for cost reduction, competitive differentiation and optimization of working capital.

    Serving industries such as financial services, insurance, health care, technology, communications, education, media, manufacturing and government, Bottomline provides products and services to approximately 80 of the Fortune 100 companies and 70 of the FTSE (Financial Times) 100 companies.

    Our client is a participating employer in the Employment Verification (E-Verify) program EOE/AA/M/F/V/D/E-Verify Employer

    Our client is an Equal Employment Opportunity and Affirmative Action Employer.

    As part of the GlobalLogic team, you will grow, be challenged, and expand your skill set working alongside highly experienced and talented people.

    Don’t waste any second, apply!

     

    Skill Category

    Data Engineering

     

    We expect candidates with extensive experience to work with a new team and demonstrate experience in the following:

    • Experience with Databricks or similar
    • Hands-on experience with the Databricks platform or similar is helpful.
    • Managing delta tables, including tasks like incremental updates, compaction, and restoring versions (a minimal MERGE sketch follows this list)
    • Proficiency in Python (or other programming languages) and SQL, commonly used to create and manage data pipelines and to query and run BI/DWH workloads on Databricks
    • Familiarity with other languages like Scala (common in the Spark/Databricks world) or Java can also be beneficial.
    • Experience with Apache Spark
    • Understanding of Apache Spark’s architecture, data processing concepts (RDDs, DataFrames, Datasets)
    • Knowledge of spark-based workflows
    • Experience with data pipelines
    • Experience in designing, building, and maintaining robust and scalable ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) pipelines.
    • Data Understanding and Business Acumen
    • The ability to analyse and understand data, identify patterns, and troubleshoot data quality issues is crucial
    • Familiarity with data profiling techniques
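    As a rough illustration of the delta-table management mentioned above, a minimal incremental upsert (MERGE) into a Delta table with PySpark; the paths and table names are assumptions.

```python
# Hypothetical incremental upsert into a Delta table on Databricks/Spark.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("parquet").load("/mnt/landing/invoices/2024-01-31/")

target = DeltaTable.forName(spark, "finance.invoices")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.invoice_id = s.invoice_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Periodic compaction of small files (OPTIMIZE is available on Databricks / recent Delta).
spark.sql("OPTIMIZE finance.invoices")
```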

     

    Job responsibilities

    • Developing a Postgres-based central storage location as the basis for long-term data storage
    • Standing up microservices to retain data based on tenant configuration, and a UI to enable customers to configure their retention policy
    • Creating the pipeline that transforms data from the transactional database into a format suited to analytical queries
    • Helping pinpoint and fix issues in data quality
    • Participating in code review sessions
    • Following the client’s standards for code and data quality


    #Remote

  • · 42 views · 6 applications · 15d

    Senior Data Engineer

    Full Remote · Poland · 5 years of experience · Upper-Intermediate

    Description

    Method is a global design and engineering consultancy founded in 1999. We believe that innovation should be meaningful, beautiful and human. We craft practical, powerful digital experiences that improve lives and transform businesses. Our teams based in New York, Charlotte, Atlanta, London, Bengaluru, and remote work with a wide range of organizations in many industries, including Healthcare, Financial Services, Retail, Automotive, Aviation, and Professional Services.

     

    Method is part of GlobalLogic, a digital product engineering company. GlobalLogic integrates experience design and complex engineering to help our clients imagine what’s possible and accelerate their transition into tomorrow’s digital businesses. GlobalLogic is a Hitachi Group Company.

     

    Your role is to collaborate with multidisciplinary individuals and support the project lead on data strategy and implementation projects. You will be responsible for data and systems assessment, identifying the critical data and quality gaps required for effective decision support, and contributing to the data platform modernization roadmap. 

     

    Responsibilities:

    • Work closely with data scientists, data architects, business analysts, and other disciplines to understand data requirements and deliver accurate data solutions.
    • Analyze and document existing data system processes to identify areas for improvement.
    • Develop detailed process maps that describe data flow and integration across systems.
    • Create a data catalog and document data structures across various databases and systems.
    • Compare data across systems to identify inconsistencies and discrepancies.
    • Contribute towards gap analysis and recommend solutions for standardizing data.
    • Recommend data governance best practices to organize and manage data assets effectively.
    • Propose database design standards and best practices to suit various downstream systems, applications, and business objectives
    • Strong problem-solving abilities with meticulous attention to detail and experience. 
    • Experience with requirements gathering and methodologies. 
    • Excellent communication and presentation skills with the ability to clearly articulate technical concepts, methodologies, and business impact to both technical teams and clients.
    • A unique point of view. You are trusted to question approaches, processes, and strategy to better serve your team and clients.

     

    Skills Required 

    Technical skills

    • Proven experience (5+ years) in data engineering.
    • 5+ years of proven data engineering experience with expertise in data warehousing, data management, and data governance in SQL or NoSQL databases.
    • Deep understanding of data modeling, data architecture, and data integration techniques.
    • Advanced proficiency in ETL/ELT processes and data pipeline development from raw, structured to business/analytics layers to support BI Analytics and AI/GenAI models.
    • Hands-on experience with ETL tools, including: Databricks (preferred), Matillion, Alteryx, or similar platforms.
    • Commercial experience with a major cloud platform like Microsoft Azure (e.g., Azure Data Factory, Azure Synapse, Azure Blob Storage).

     

     

    Core Technology stack

    Databases

    • Oracle RDBMS (for OLTP): Expert SQL for complex queries, DML, DDL.
    • Oracle Exadata (for OLAP/Data Warehouse): Advanced SQL optimized for analytical workloads. Experience with data loading techniques and performance optimization on Exadata.

    Storage:

    • S3-Compatible Object Storage (On-Prem): Proficiency with S3 APIs for data ingest, retrieval, and management (a minimal sketch follows).
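    For illustration, a minimal boto3 sketch of talking to an on-prem S3-compatible object store through the S3 API; the endpoint, bucket, keys, and credentials are placeholder assumptions.

```python
# Hypothetical sketch: upload and list objects against an on-prem S3-compatible endpoint.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.internal.example.com",  # on-prem endpoint (placeholder)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload an extract produced by a nightly job, then list what is in the prefix.
s3.upload_file("daily_extract.csv", "dwh-ingest", "oracle/daily_extract.csv")

for obj in s3.list_objects_v2(Bucket="dwh-ingest", Prefix="oracle/").get("Contents", []):
    print(obj["Key"], obj["Size"])
```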

    Programming & Scripting:

    • Python: Core language for ETL/ELT development, automation, and data manipulation.
    • Shell Scripting (Linux/Unix): Bash/sh for automation, file system operations, and job control.

    Version Control: 

             Git: Managing all code artifacts (SQL scripts, Python code, configuration files).

    Related Technologies & Concepts:

    • Data Pipeline Orchestration Concepts: Understanding of scheduling, dependency management, monitoring, and alerting for data pipelines
    • Containerization: Docker, basic understanding of how containerization works
    • API Interaction: Understanding of REST APIs for data exchange (as they might need to integrate with the Java Spring Boot microservices).
       

    Location

    • Remote across Poland

     

    Why Method?

    We look for individuals who are smart, kind and brave. Curious people with a natural ability to think on their feet, learn fast, and develop points of view for a constantly changing world find Method an exciting place to work. Our employees are excited to collaborate with dispersed and diverse teams that bring together the best in thinking and making. We champion the ability to listen and believe that critique and dissonance lead to better outcomes. We believe everyone has the capacity to lead and look for proactive individuals who can take and give direction, lead by example, enjoy the making as much as they do the thinking, especially at senior and leadership levels.

    Next Steps

    If Method sounds like the place for you, please submit an application. Also, let us know if you have a presence online with a portfolio, GitHub, Dribbble, or another platform.

     

    * For information on how we process your personal data, please see Privacy: https://www.method.com/privacy/

  • · 45 views · 1 application · 12d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
    • Project Description:

      The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.

      Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.

      Current overriding priorities are new brands onboarding, re-architecture, database migrations, migration of microservices to a unified cloud-native solution without any disruption to business.

    • Responsibilities:

      We are looking for a Data Engineer who will be responsible for designing a solution for a big retail company. The main focus is to support processing of large data volumes and to integrate the solution into the current architecture.

    • Mandatory Skills Description:

      • Recent hands-on experience with Azure Data Factory and Synapse.
      • Experience in leading a distributed team.
      • Strong expertise in designing and implementing data models, including conceptual, logical, and physical data models, to support efficient data storage and retrieval.
      • Strong knowledge of Microsoft Azure, including Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, and Azure Databricks, and pySpark for building scalable and reliable data solutions (a minimal pySpark sketch follows this list).
      • Extensive experience with building robust and scalable ETL/ELT pipelines to extract, transform, and load data from various sources into data lakes or data warehouses.
      • Ability to integrate data from disparate sources, including databases, APIs, and external data providers, using appropriate techniques such as API integration or message queuing.
      • Proficiency in designing and implementing data warehousing solutions (dimensional modeling, star schemas, Data Mesh, Data/Delta Lakehouse, Data Vault)
      • Proficiency in SQL to perform complex queries, data transformations, and performance tuning on cloud-based data storages.
      • Experience integrating metadata and governance processes into cloud-based data platforms
      • Certification in Azure, Databricks, or other relevant technologies is an added advantage
      • Experience with cloud-based analytical databases.
      • Experience with Azure MI, Azure Database for Postgres, Azure Cosmos DB, Azure Analysis Services, and Informix.
      • Experience with Python and Python-based ETL tools.
      • Experience with shell scripting in Bash, Unix, or Windows shell is preferable.
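      As a rough illustration of the Azure/pySpark work described above, a minimal sketch that reads raw parquet from ADLS Gen2 and publishes a cleaned Delta table; the storage account, container, and table names are assumptions.

```python
# Hypothetical sketch: ADLS Gen2 parquet -> cleaned Delta table (Databricks/Synapse Spark).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.parquet(
    "abfss://raw@retaildatalake.dfs.core.windows.net/orders/2024/01/31/"
)

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_total") >= 0)
)

(
    cleaned.write.format("delta")
    .mode("overwrite")
    .option("replaceWhere", "order_date = '2024-01-31'")  # overwrite only this date's slice
    .saveAsTable("curated.orders")
)
```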

    • Nice-to-Have Skills Description:

      • Experience with Elasticsearch
      • Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
      • Troubleshooting and Performance Tuning: Ability to identify and resolve performance bottlenecks in data processing workflows and optimize data pipelines for efficient data ingestion and analysis.
      • Collaboration and Communication: Strong interpersonal skills to collaborate effectively with stakeholders, data engineers, data scientists, and other cross-functional teams.

    • Languages:
      • English: B2 Upper Intermediate
  • · 84 views · 17 applications · 12d

    Data Engineer(Billing Automation)

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · Intermediate

    We’re looking for a Data Engineer to support and analyze our Atlas CRM system. In this role, you’ll be responsible for running monthly invoicing processes, generating client reports, and collaborating closely with Sales, Account Management, Billing, Legal, and Integration teams to gather and process key customer data.

     

    Responsibilities:

    • Develop and maintain Python scripts to retrieve and process data from APIs (a minimal sketch follows this list);
    • Clean and transform raw data into structured formats;
    • Troubleshoot and debug issues related to API requests and data processing;
    • Continuously improve the project/application by optimizing performance, enhancing features, and implementing best practices;
    • Generate, analyze, and visualize reports using Tableau to support business decisions;
    • Create and manage dashboards, filters, and data visualizations to provide insights;
    • Collaborate with teams to ensure data accuracy and system efficiency.
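    For illustration, a minimal sketch of the monthly invoicing flow: pull usage records from a REST API, shape them with pandas, and export a CSV for Tableau; the endpoint, token, and field names are placeholder assumptions.

```python
# Hypothetical sketch: REST API -> pandas -> CSV for monthly invoicing reports.
import pandas as pd
import requests

API_URL = "https://atlas.example.com/api/v1/usage"   # placeholder endpoint
TOKEN = "***"                                        # placeholder credential

resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"period": "2024-01"},
    timeout=30,
)
resp.raise_for_status()

records = pd.json_normalize(resp.json()["items"])    # "items" key is an assumption
records["amount"] = records["units"] * records["unit_price"]

invoice_totals = records.groupby("client_id", as_index=False)["amount"].sum()
invoice_totals.to_csv("invoices_2024-01.csv", index=False)
```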

       

    Requirements:

    • 2+ years of working experience as a Data Engineer; 
    • Knowledge of Python, with a focus on data processing and automation;
    • Experience working with RESTful APIs, JSON and authentication mechanisms;
    • Understanding of data structures and usage of Pandas for data manipulation;
    • Experience with SQL for querying and managing datasets;
    • Ability to create and interpret reports and dashboards;
    • Basic understanding of financial concepts and working with financial data;
    • Basic understanding of SQL for working with structured data;
    • Basic experience with Docker is a plus.

     

    Benefits:

    🎰 Be part of the international iGaming industry – Work with a top European solution provider and shape the future of online gaming;

    💕 A Collaborative Culture – Join a supportive and understanding team;

    💰 Competitive salary and bonus system – Enjoy additional rewards on top of your base salary;

    📆 Unlimited vacation & sick leave – Because we prioritize your well-being;

    📈 Professional Development – Access a dedicated budget for self-development and learning;

    🏥 Healthcare coverage – Available for employees in Ukraine and compensation across the EU;

    🫂 Mental health support – Free consultations with a corporate psychologist;

    🇬🇧 Language learning support – We cover the cost of foreign language courses;

    🎁 Celebrating Your Milestones – Special gifts for life’s important moments;

    ⏳ Flexible working hours – Start your day anytime between 9:00-11:00 AM;

    🏢 Flexible Work Arrangements – Choose between remote, office, or hybrid work;

    🖥 Modern Tech Setup – Get the tools you need to perform at your best;

    🚚 Relocation support – Assistance provided if you move to one of our hubs.

  • · 127 views · 9 applications · 12d

    Junior AI Data Engineer to $1300

    Full Remote · Ukraine · 2 years of experience · Intermediate

    Junior AI Data Engineer

    Location: Remote
    Type: Full-time/Contract
     

    🧠 Requirements

    • 1–3 years of experience in a technical field (cybernetics, statistics, analytics, etc.)
    • Strong knowledge of SQL and Python
    • Familiarity with GitHub: branching, pull requests, version control
    • Experience with API documentation and external integrations (REST, GraphQL)
    • Interest in Machine Learning and proficiency with AI tools (ChatGPT, Copilot, LangChain, etc.)
    • Basic frontend understanding (HTML/CSS/JS, JSON, API payloads)
    • Understanding of data engineering tools (Airflow, Kafka, ETL/ELT concepts)
    • Familiarity with cloud platforms (AWS preferred, GCP is a plus)
    • Solid grasp of financial KPIs (LTV, retention, CAC)
    • B1/B2 English proficiency
    • Responsible, self-motivated, and proactive in learning

     

     

    🔧 Responsibilities:

    • Work with APIs to extract, transform, and load data
    • Build and support data pipelines (ETL/ELT) for analytics and ML workflows
    • Connect frontend and backend systems by designing and maintaining pipelines that serve data to dashboards or apps
    • Deploy and support ML models in production environments
    • Create and modify PDFs using Python for reporting or client deliverables (a minimal sketch follows this list)
    • Monitor, maintain, and optimize cloud-based databases
    • Interpret and implement API documentation for CRM and third-party tools
    • Use GitHub for collaborative development and code reviews
    • Track KPIs and generate actionable data insights
    • Communicate directly with project managers—clear tasks, quick feedback, ownership encouraged
    • Work flexibly: no fixed hours, just deadlines
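    As a rough illustration of the PDF-reporting responsibility, a minimal reportlab sketch that renders a one-page KPI summary; the figures and file name are placeholders.

```python
# Hypothetical sketch: render a one-page KPI summary as a PDF with reportlab.
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

kpis = {"LTV": 128.4, "Retention (30d)": 0.42, "CAC": 35.7}   # placeholder values

pdf = canvas.Canvas("weekly_kpi_report.pdf", pagesize=A4)
pdf.setFont("Helvetica-Bold", 16)
pdf.drawString(72, 780, "Weekly KPI Report")

pdf.setFont("Helvetica", 12)
y = 740
for name, value in kpis.items():
    pdf.drawString(72, y, f"{name}: {value}")
    y -= 20

pdf.save()
```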

     

    💡 About data212

    data212 is a pragmatic, fast-moving start-up offering full-service analytics, engineering, and AI support. We help businesses turn data into growth—lean, sharp, and fully remote.

  • · 133 views · 18 applications · 11d

    Data Engineer

    Full Remote · Worldwide · Product · 3 years of experience · Pre-Intermediate

    Responsibilities:

    • Design and develop ETL pipelines using Airflow and Apache Spark for Snowflake and Trino (a minimal Snowflake query sketch follows this list)
    • Optimize existing pipelines and improve the Airflow framework
    • Collaborate with analysts, optimize complex SQL queries, and help foster a strong data-driven culture
    • Research and implement new data engineering tools and practices
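    For illustration, a minimal sketch of running an analytical query against Snowflake from Python with the Snowflake connector; the account, credentials, warehouse, and table are placeholder assumptions.

```python
# Hypothetical sketch: run an analytical query against Snowflake from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="acme-eu1",          # placeholder account
    user="ETL_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="MARTS",
    schema="B2B",
)

cur = conn.cursor()
try:
    cur.execute(
        """
        SELECT account_domain, COUNT(*) AS signals
        FROM intent_signals
        WHERE signal_date >= DATEADD(day, -7, CURRENT_DATE)
        GROUP BY account_domain
        ORDER BY signals DESC
        LIMIT 20
        """
    )
    for domain, signals in cur.fetchall():
        print(domain, signals)
finally:
    cur.close()
    conn.close()
```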

     

    Requirements:

    • Experience with Apache Spark
    • Experience with Airflow
    • Proficiency in Python
    • Familiarity with Snowflake and Trino is a plus
    • Understanding of data architecture, including logical and physical data layers
    • Strong SQL skills for analytical queries
    • English proficiency at B1/B2 level

     

    About the Project:

    We’re a fast-growing tech startup in the B2B marketing space, developing a next-generation platform for identifying and engaging target customers.

    Our product combines artificial intelligence, big data, and proprietary de-anonymization tools to detect behavioral signals from potential buyers in real time and convert them into high-quality leads.

    The team is building a solution that helps businesses identify “hot” prospects even before they express interest — making marketing and sales efforts highly targeted and personalized.
