Jobs

  • 62 views · 8 applications · 28d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · Intermediate

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Working with high-volume tables (10M+ rows); a minimal chunked-processing sketch in Pandas follows this list.
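
    For illustration only, a hedged sketch of chunked aggregation over a table too large to load at once; the file name, column names, and chunk size are all hypothetical:

    import pandas as pd

    # Stream a large table in chunks and aggregate incrementally.
    # "events.csv", the column names, and the chunk size are assumptions.
    totals = {}
    for chunk in pd.read_csv("events.csv", chunksize=1_000_000):
        sums = chunk.groupby("user_id")["amount"].sum()
        for user_id, amount in sums.items():
            totals[user_id] = totals.get(user_id, 0) + amount

    print(f"{len(totals)} distinct users aggregated")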


    Optional skills (as a plus):
    • Experience with Spark (PySpark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of DS and Machine learning algorithms.

     

    Key responsibilities:
    • Create ETL pipelines and data management solutions (API, Integration logic);
    • Implement various data processing algorithms;
    • Involvement in creation of forecasting, recommendation, and classification models.

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • 34 views · 2 applications · 7d

    Team/Tech Lead Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Looking for a Team Lead Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.

     

    As a Team Lead, you will be an expert and a leader, playing a crucial role in guiding the development team, making technical decisions, and ensuring the successful delivery of high-quality software products.

     

    Skills requirements:

    • 5+ years of experience with Python;

    • 4+ years of experience as a Data Engineer;

    • Knowledge of data algorithms and data structures is a MUST;

    • Excellent experience with Pandas;

    • Excellent experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;

    • Experience with Apache Kafka and Apache Spark (PySpark);

    • Experience with Hadoop;

    • Familiarity with Amazon Web Services;

    • Understanding of cluster computing fundamentals;

    • Working with high-volume tables (100M+ rows); a minimal Kafka + PySpark sketch follows this list.
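
    As a hedged illustration of the Kafka and Spark items above, a minimal Structured Streaming sketch; the broker address and topic name are hypothetical, and the job assumes the spark-sql-kafka connector is available at runtime:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Count Kafka events per one-minute window with Spark Structured Streaming.
    # Broker address and topic name are assumptions.
    spark = SparkSession.builder.appName("kafka-agg-sketch").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    counts = events.groupBy(F.window(F.col("timestamp"), "1 minute")).count()

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()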

     

    Optional skills (as a plus):

    • Experience with scheduling and monitoring (Databricks, Prometheus, Grafana);

    • Experience with Airflow;

    • Experience with Snowflake, Terraform;

    • Experience in statistics;

    • Knowledge of DS and Machine learning algorithms.

     

    Key responsibilities:

    • Manage the development process and support team members;

    • Conduct R&D work with new technology;

    • Maintain high-quality coding standards within the team;

    • Create ETL pipelines and data management solutions (API, Integration logic);

    • Develop various data processing algorithms;

    • Involvement in creation of forecasting, recommendation, and classification models;

    • Develop and implement workflows for receiving and transforming new data sources to be used in the company;

    • Develop existing Data Engineering infrastructure to make it scalable and prepare it for anticipated future volumes;

    • Identify, design, and implement process improvements (e.g., automation of manual processes, infrastructure redesign).

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • 41 views · 0 applications · 13d

    Big Data Engineer (Azure, Databricks)

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate

    We are seeking a skilled Big Data Engineer to become an essential part of this forward-looking and dynamic team.

    In this role, you will leverage advanced cloud technologies to deliver impactful solutions.

     

    Responsibilities

    • Build scalable, cloud-based systems in Azure using tools like Azure Data Factory, Azure Data Lake Storage, and Databricks to deliver seamless, high-performance data pipelines
    • Migrate on-premises MS SQL databases to Azure Data Lake, utilizing the Delta Lake format to optimize operational performance (a minimal sketch follows this list)
    • Develop interfaces that connect non-Microsoft proprietary applications, enabling interoperability and unlocking valuable data insights
    • Provide expertise in Data Lakes, Data Warehouses (DWH), and Delta Lakehouse architectures to guide transformative business solutions
    • Assess new feature proposals, prioritize them based on business value, and collaborate to make impactful product development decisions
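
    As a rough, hedged sketch of the MS SQL to Delta Lake migration step above, as it might look in a Databricks notebook (using the ambient spark session Databricks provides); the JDBC details, credentials, and ADLS path are all hypothetical:

    # Copy one table from an on-premises MS SQL database into Delta format.
    # JDBC URL, credentials, table name, and storage path are assumptions.
    df = (
        spark.read.format("jdbc")
        .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
        .option("dbtable", "dbo.orders")
        .option("user", "etl_user")
        .option("password", "<secret>")
        .load()
    )

    (
        df.write.format("delta")
        .mode("overwrite")
        .save("abfss://lake@storageacct.dfs.core.windows.net/bronze/orders")
    )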
       

    Requirements

    • Expertise in Azure Data Services, including Data Factory, Data Lake Storage, and Databricks
    • Strong SQL skills with experience in Python, Scala, or C# (versatility is highly valued)
    • Background in working with Agile or XP methodologies, thriving in fast-moving, adaptive environments
    • Good English communication skills (Upper-Intermediate/B2+ or higher) to engage effectively with diverse teams and stakeholders

     

  • 48 views · 7 applications · 23 June

    Senior Software Data Engineer

    Full Remote · Worldwide · Product · 7 years of experience · Upper-Intermediate

    Join Burny Games — a Ukrainian company that creates mobile puzzle games. Our mission is to create top-notch innovative games to challenge players' minds daily.

    What makes us proud?

    • In just two years, we've launched two successful mobile games worldwide: Playdoku and Colorwood Sort. We have paused some projects to focus on making our games better and helping our team improve.
    • Our games have been enjoyed by over 8 million players worldwide, and we keep attracting more players.
    • We've created a culture where we make decisions based on data, which helps us grow every month.
    • We believe in keeping things simple, focusing on creativity, and always searching for new and effective solutions.


    We are seeking an experienced software engineer to create a high-performance, scalable, and flexible real-time analytics platform.
    You will be a key member of our team, responsible for the architecture, development, and optimization of services for processing and analyzing large volumes of data (terabytes).
     

    Required professional experience:

    • 5+ years of experience in developing distributed systems or systems at scale.
    • Willingness to upskill in Go; proficiency in one of the following languages: Go, Python, Java/Scala/Kotlin, Rust.
    • Rock-solid computer science fundamentals.
    • Experience with any NoSQL (preferably Cassandra) and OLAP (preferably ClickHouse) databases.
    • Experience with a distributed log-based messaging system (e.g., Kafka, NATS JetStream); a minimal Kafka-to-ClickHouse sketch follows this list.
    • Experience with Kubernetes (Helm, ArgoCD).
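
    A hedged sketch of the Kafka-to-ClickHouse path named above, written in Python for brevity (the posting prefers Go) with the kafka-python and clickhouse-driver packages; broker, topic, table, and batch size are hypothetical:

    import json

    from clickhouse_driver import Client
    from kafka import KafkaConsumer

    # Consume JSON events from Kafka and batch-insert them into ClickHouse.
    # Broker, topic, table schema, and batch size are assumptions.
    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="broker:9092",
        value_deserializer=json.loads,
    )
    client = Client(host="clickhouse-host")

    batch = []
    for message in consumer:
        event = message.value
        batch.append((event["user_id"], event["event_type"], event["ts"]))
        if len(batch) >= 1000:
            client.execute(
                "INSERT INTO events (user_id, event_type, ts) VALUES", batch
            )
            batch.clear()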
       

    Desired Skills:

    • Experience with common networking protocols.
    • Experience working with observability tools, such as metrics and traces.
    • Database fundamentals.
    • Understanding of scalable system design principles and architectures for real-time data processing.
    • Experience with a distributed processing engine (e.g., Flink or Spark).
    • Experience with an open table format (e.g., Apache Iceberg, Delta Lake, Hudi).
    • Experience with a cloud platform (e.g., Google Cloud, AWS, Azure).
       

    Key Responsibilities:

    • Design and develop the architecture of a behavioral analytics platform for real-time big data processing.
    • Implement key engine systems (data collection, event processing, aggregation, preparing data for visualization).
    • Optimize the platform performance and scalability for handling large data volumes.
    • Develop tools for user behavior analysis and product metrics.
    • Collaborate with data analysts and product managers to integrate the engine into analytics projects.
    • Research and implement new technologies and methods in data analysis.
       

    What we offer:

    • 100% payment of vacations and sick leave [20 days vacation, 22 days sick leave], medical insurance.
    • A team of the best professionals in the games industry.
    • Flexible schedule [start of work from 8 to 11, 8 hours/day].
    • L&D center with courses.
    • Self-learning library, access to paid courses.
    • Stable payments.
       

    The recruitment process:

    CV review → Interview with talent acquisition manager → Interview with hiring manager → Job offer.

    If you share our goals and values and are eager to join a team of dedicated professionals, we invite you to take the next step.

  • 84 views · 2 applications · 23d

    Data Engineer (Azure)

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate

    Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client’s platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.

    Key Responsibilities:
    - Create and manage scalable data pipelines with Azure SQL and other databases;
    - Use Azure Data Factory to automate data workflows;
    - Write efficient Python code for data analysis and processing;
    - Develop data reports and dashboards using Power BI;
    - Use Docker for application containerization and deployment streamlining;
    - Manage code quality and version control with Git.

    Skills requirements:
    - 3+ years of experience with Python;
    - 2+ years of experience as a Data Engineer;
    - Strong SQL knowledge, preferably with Azure SQL experience (a minimal query sketch follows this list);
    - Python skills for data manipulation;
    - Expertise in Docker for app containerization;
    - Familiarity with Git for managing code versions and collaboration;
    - Upper-Intermediate level of English.
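
    For illustration only, a hedged sketch of querying Azure SQL from Python with pyodbc and pandas; the server, database, credentials, and table are all hypothetical:

    import pandas as pd
    import pyodbc

    # Pull a result set from Azure SQL into a DataFrame for processing.
    # Server, database, credentials, and table name are assumptions.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:myserver.database.windows.net,1433;"
        "Database=watermonitoring;"
        "Uid=etl_user;Pwd=<secret>;Encrypt=yes;"
    )

    df = pd.read_sql("SELECT station_id, ph, measured_at FROM readings", conn)
    print(df.head())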

    Optional skills (as a plus):
    - Experience with Azure Data Factory for orchestrating data processes;
    - Experience developing APIs with FastAPI or Flask;
    - Proficiency in Databricks for big data tasks;
    - Experience in a dynamic, agile work environment;
    - Ability to manage multiple projects independently;
    - Proactive attitude toward continuous learning and improvement.

    We offer:

    - Great networking opportunities with international clients, challenging tasks;

    - Building interesting projects from scratch using new technologies;

    - Personal and professional development opportunities;

    - Competitive salary fixed in USD;

    - Paid vacation and sick leaves;

    - Flexible work schedule;

    - Friendly working environment with minimal hierarchy;

    - Team building activities and corporate events.

  • 25 views · 0 applications · 17d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate

    N-iX is looking for a Senior Data Engineer to join our skilled and continuously growing team! The position is for our fintech customer from Europe. The person would be a part of the customer’s Data Platform team - a key function within the company, responsible for the architecture, development, and management of our core data infrastructure. We leverage Snowflake, Looker, Airflow (MWAA), and dbt while managing DevOps configurations for the platform. Our goal is to build and maintain a self-serve data platform that empowers stakeholders with tools for efficient data management while ensuring security, governance, and compliance standards.
     

    Requirements:

    • 6+ years of experience in Data Engineering.
    • Strong proficiency in Airflow, Python, and SQL (a minimal DAG sketch follows this list).
    • Hands-on experience with cloud data warehouses (Snowflake or equivalent).
    • Solid understanding of AWS services and Kubernetes at an advanced user level.
    • Familiarity with Data Quality and Observability best practices.
    • Ability to thrive in a dynamic environment with a strong sense of ownership and responsibility.
    • Analytical mindset and problem-solving skills for tackling complex technical challenges.
    • Bachelor's degree in Mathematics, Computer Science, or another relevant quantitative field.
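
    Since the stack above centers on Airflow and dbt, a hedged, minimal DAG sketch; the DAG id, schedule, start date, and dbt project path are hypothetical:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Minimal daily DAG that runs dbt against the warehouse.
    # DAG id, start date, and project path are assumptions.
    with DAG(
        dag_id="dbt_daily_run",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt/project && dbt run",
        )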
       

    Nice-to-Have Skills:

    • Experience with DevOps practices, CI/CD, and Infrastructure as Code (IaC).
    • Hands-on experience with Looker or other BI tools.
    • Performance optimization of large-scale data pipelines.
    • Knowledge of metadata management and Data Governance best practices.
       

    Responsibilities:

    • Design and develop a scalable data platform to efficiently process and analyze large volumes of data using Snowflake, Looker, Airflow, and dbt.
    • Enhance the self-serve data platform by implementing new features to improve stakeholder access and usability.
    • Work with cross-functional teams to provide tailored data solutions and optimize data pipelines.
    • Foster a culture of knowledge sharing within the team to enhance collaboration and continuous learning.
    • Stay updated on emerging technologies and best practices in data engineering and bring innovative ideas to improve the platform.
  • 50 views · 10 applications · 26d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · Advanced/Fluent

    Requirements: 

    • Develop and maintain data pipelines and ETLs. 

    • Support the development and maintenance of data visualization solutions for the developed data products. 

    • Build and maintain cloud infrastructure for multiple solutions using various AWS services through the AWS CDK in Python (a minimal stack sketch follows this list).

    • Build reusable components for multiple solutions. 

    • Design, build, and implement data quality checks. 

    • Gather and translate business requirements into technical requirements. 
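
    A hedged, minimal AWS CDK (v2) stack in Python, matching the CDK item above; the stack name and bucket are hypothetical:

    from aws_cdk import App, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    # Toy CDK stack defining a single versioned S3 bucket.
    # Stack and bucket names are assumptions.
    class DataLakeStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            s3.Bucket(self, "RawDataBucket", versioned=True)

    app = App()
    DataLakeStack(app, "data-lake-sketch")
    app.synth()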

     

     

    We offer:

    • Attractive financial package

    • Challenging projects

    • Professional & career growth

    • Great atmosphere in a friendly small team

  • 42 views · 4 applications · 17d

    Data Engineer

    Ukraine · 4 years of experience · Upper-Intermediate

    On behalf of our Client, a well-established financial institution from the Caribbean region, Mobilunity is looking for a Data Engineer.

     

    Our Client is the largest bank in the Caribbean region, serving 14 countries/territories. The aim is to transform this organization from a traditional bank into a new-era fintech, working on the edge of what current fintech can offer.

     

    Requirements:

    • Experience with ETL/ELT
    • Proficiency in AWS Glue and Spark (a minimal Glue job skeleton follows this list)
    • Strong programming skills in Python and SQL
    • Hands-on experience with MWAA / Airflow
    • Good understanding of AWS Basics (IAM, S3)
    • Experience working with Aurora and PostgreSQL
    • Knowledge of Kafka / MSK, including Kafka Connect and Debezium
    • Familiarity with Lake Formation
    • Experience using Glue Data Catalog
    • Solid understanding of data modeling principles
    • Experience with Glue Streaming
    • Level of English – Upper-Intermediate and higher
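
    A hedged skeleton of a Glue PySpark job corresponding to the Glue and Spark items above; the catalog database, table, and S3 path are hypothetical:

    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    # Standard Glue job boilerplate; source and sink names are assumptions.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    orders = glue_context.create_dynamic_frame.from_catalog(
        database="raw", table_name="orders"
    )
    glue_context.write_dynamic_frame.from_options(
        frame=orders,
        connection_type="s3",
        connection_options={"path": "s3://my-bucket/curated/orders/"},
        format="parquet",
    )
    job.commit()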

       

    Nice to have:

    • Previous experience working in the fintech industry

     

    🐳In return we offer:

    • The friendliest community of like-minded IT-people
    • Open knowledge-sharing environment – exclusive access to a rich pool of colleagues willing to share their endless insights into the broadest variety of modern technologies
    • Perfect office location in the city center (900 m from Lukyanivska metro station, in a green and spacious neighborhood) or remote engagement: choose whichever suits you, or combine both
    • No open-space setup – separate rooms for every team’s comfort, plus multiple lounge and gaming zones
    • Never-ending fun: sports events, tournaments, a music band, multiple affinity groups
       

    🐳 Come on board, and let’s grow together! 🐳

  • 173 views · 13 applications · 7d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:

    • 6+ months of experience as a Data Engineer;

    • Experience with SQL;

    • Experience with Python (a small sketch combining both follows this list).
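
    By way of example, a tiny hedged sketch of the SQL + Python combination, using Python's built-in sqlite3 with pandas; the table and data are made up:

    import sqlite3

    import pandas as pd

    # Toy example: load rows into SQLite, then read them back with SQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, country TEXT)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?)",
        [(1, "UA"), (2, "PL"), (3, "UA")],
    )

    df = pd.read_sql("SELECT country, COUNT(*) AS n FROM users GROUP BY country", conn)
    print(df)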

     

     

    Optional skills (as a plus):

    • Experience with ETL / ELT pipelines;

    • Experience with PySpark;

    • Experience with Airflow;

    • Experience with Databricks;

     

    Key Responsibilities:

    • Apply data processing algorithms;

    • Create ETL/ELT pipelines and data management solutions;

    • Work with SQL queries for data extraction and analysis;

    • Analyze data and apply data processing algorithms to solve business problems;

     

     

    We offer:

    • Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;

    • Opportunity to work with the high-skilled engineering team on challenging projects;

    • Interesting projects with new technologies;

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • 57 views · 5 applications · 23d

    Data Engineer

    Full Remote · Ukraine · Product · 3 years of experience · Intermediate

    We are looking for an experienced Data Engineer to design and maintain robust data infrastructure across our systems. In this role, you will be responsible for building scalable data pipelines, ensuring data integrity, and integrating third-party data sources. Your primary focus will be to enable efficient data flow and support analytical capabilities across the organization. You will also contribute to the development of our data architecture, implement best engineering practices, and collaborate closely with cross-functional teams to turn raw data into actionable insights.

     

    Responsibilities

    • Communicate with both technical and non-technical audiences to gather requirements
    • Review and analyze data and logic to ensure consistency and accuracy
    • Design, implement, and maintain data pipelines for efficient data flow
    • Integrate and support developed solutions
    • Research and evaluate third-party components for potential use
    • Follow best engineering practices: refactoring, code review, testing, continuous delivery, and Scrum
    • Design, optimize, and support data storage

     

    Requirements

    • 5+ years of experience in data engineering
    • Experience in requirement gathering and communication with stakeholders
    • Strong knowledge of DWH (data warehouse) architecture and principles
    • Practical experience building ETL pipelines and designing data warehouses
    • Deep experience with Python, with a strong focus on PySpark
    • Proficiency in SQL and databases such as PostgreSQL, ClickHouse, MySQL
    • Hands-on experience with data scraping and integrating third-party sources and APIs (a small ingestion sketch follows this list)
    • Solid understanding of software design patterns, algorithms, and data structures
    • Intermediate English proficiency
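
    A hedged sketch of the third-party API ingestion mentioned above, using requests and psycopg2; the endpoint URL, DSN, table, and columns are all hypothetical:

    import psycopg2
    import requests

    # Fetch records from a third-party API and upsert them into PostgreSQL.
    # The endpoint URL, DSN, table, and columns are assumptions.
    rows = requests.get("https://api.example.com/v1/trades", timeout=30).json()

    conn = psycopg2.connect("dbname=dwh user=etl password=<secret> host=db")
    with conn, conn.cursor() as cur:
        for row in rows:
            cur.execute(
                """
                INSERT INTO trades (id, symbol, price)
                VALUES (%s, %s, %s)
                ON CONFLICT (id) DO UPDATE SET price = EXCLUDED.price
                """,
                (row["id"], row["symbol"], row["price"]),
            )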

     

    Will be a plus

    • Experience with RabbitMQ or Kafka
    • Understanding of web application architecture
    • Familiarity with DataOps practices
    • Background in FinTech or Trading domains

     

    We offer

    • Tax expenses coverage for private entrepreneurs in Ukraine
    • Expert support and guidance for Ukrainian private entrepreneurs
    • 20 paid vacation days per year
    • 10 paid sick leave days per year
    • Public holidays as per the company's approved Public holiday list
    • Medical insurance
    • Opportunity to work remotely
    • Professional education budget
    • Language learning budget
    • Wellness budget (gym membership, sports gear and related expenses)


     

  • 60 views · 10 applications · 23 June

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Boosty Labs is one of the most prominent outsourcing companies in the blockchain domain. Among our clients are such well-known companies as Ledger, Consensys, Storj, Animoca brands, Walletconnect, Coinspaid, Paraswap, and others.

    About the project: advanced blockchain analytics and on-the-ground intelligence to empower financial institutions, governments, and regulators in the fight against cryptocurrency crime.

    • Requirements:
      • 6+ years of experience with Python backend development
      • Solid knowledge of SQL (including writing/debugging complex queries)
      • Understanding of data warehouse principles and backend architecture
      • Experience working in Linux/Unix environments
      • Experience with APIs and Python frameworks (e.g., Flask, FastAPI); a minimal endpoint sketch follows this section
      • Experience with PostgreSQL
      • Familiarity with Docker
      • Basic understanding of unit testing
      • Good communication skills and ability to work in a team
      • Interest in blockchain technology or willingness to learn
      • Experience with CI/CD processes and containerization (Docker, Kubernetes) is a plus
      • Strong problem-solving skills and the ability to work independently
    • Responsibilities:
      • Integrate new blockchains, AMM protocols, and bridges into our platform
      • Build and maintain data pipelines and backend services
      • Help implement new tools and technologies into the system
      • Participate in the full cycle of feature development – from design to release
      • Write clean and testable code
      • Collaborate with the team through code reviews and brainstorming
    • Nice to Have:
      • Experience with Kafka, Spark, or ClickHouse
      • Knowledge of Kubernetes, Terraform, or Ansible
      • Interest in crypto, DeFi, or distributed systems
      • Experience with open-source tools
      • Some experience with Java or readiness to explore it
    • What we offer:
      • Remote working format 
      • Flexible working hours
      • Informal and friendly atmosphere
      • The ability to focus on your work: a lack of bureaucracy and micromanagement
      • 20 paid vacation days
      • 7 paid sick leaves
      • Education reimbursement
      • Free English classes
      • Psychologist consultations
    • Recruitment process:

      Recruitment Interview – Technical Interview
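
    To illustrate the Flask/FastAPI requirement flagged above, a minimal hedged FastAPI sketch; the route and response shape are made up:

    from fastapi import FastAPI

    app = FastAPI()

    # Toy endpoint; a real service would look risk scores up in a store.
    # The route and response shape are assumptions.
    @app.get("/addresses/{address}/risk")
    def address_risk(address: str) -> dict:
        return {"address": address, "risk_score": 0.0}

    # Run with: uvicorn app:app --reload  (assuming this file is app.py)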

  • 40 views · 3 applications · 22d

    Data Engineer (6 months, Europe-based)

    Full Remote · EU · 4 years of experience · Upper-Intermediate

    The client is seeking an experienced Data Engineer to build and migrate data solutions to Google Cloud Platform (GCP) in support of data analytics and ML/AI initiatives.

     

    Key responsibilities:

    • Develop data products on GCP using BigQuery and DBT
    • Integrate data from multiple sources using Python and Cloud Functions
    • Orchestrate pipelines with Terraform and Cloud Workflows
    • Collaborate with Solution Architects, Data Scientists, and Software Engineers

     

    Tech stack:
    GCP (BigQuery, Cloud Functions, Cloud Workflows), DBT, Python, Terraform, Git (a minimal BigQuery sketch follows below)
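
    As a small illustration of the BigQuery part of this stack, a hedged sketch with the google-cloud-bigquery client; project, dataset, and table names are hypothetical, and credentials are assumed to come from the environment:

    from google.cloud import bigquery

    # Run a query against BigQuery and pull the result into a DataFrame.
    # Project, dataset, and table names are assumptions; credentials come
    # from the environment (GOOGLE_APPLICATION_CREDENTIALS).
    client = bigquery.Client()

    sql = """
        SELECT country, COUNT(*) AS orders
        FROM `my-project.analytics.orders`
        GROUP BY country
        ORDER BY orders DESC
    """
    df = client.query(sql).to_dataframe()
    print(df.head())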

     

    Requirements:
    Ability to work independently and within cross-functional teams; 
    Strong hands-on experience with the listed stack;
    English: Upper-Intermediate or higher

     

    Nice to have:
    Experience with OLAP cubes and PowerBI

     

  • 23 views · 0 applications · 16d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Intermediate

    Job Description

    Strong experience in designing, building, and maintaining data pipelines using Databricks Workflows, with data ingestion and transformation in PySpark
    Design, create, and maintain data pipelines that leverage Delta tables for efficient data storage and processing within a Databricks environment
    Experience with Unity Catalog
    Experience with RDBMS, such as MS SQL or MySQL, as well as NoSQL
    Data modeling and schema design
    Proven understanding and demonstrable implementation experience in Azure (Databricks + Key Vault + ADLS Gen 2)
    Excellent interpersonal and teamwork skills
    Strong problem-solving, troubleshooting and analysis skills
    Good knowledge of Agile Scrum

     

    MUST-HAVE SKILLS

    Databricks, PySpark, MS SQL, ADLS Gen 2, Unity Catalog

    Job Responsibilities

    Responsible for the design and implementation of key components in the system.
    Takes ownership of features, leads design decisions
    Peer-review the code and provide constructive feedback
    Takes part in defining technical strategies and best practices for the team
    Assists with backlog refinement and estimation at story level
    Identifies and resolves bottlenecks in the development process (such as performance bottlenecks)
    Solves complex tasks without supervision.

    Department/Project Description

    GlobalLogic is searching for a motivated, results-driven, and innovative engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

  • 57 views · 4 applications · 22d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate

    Our long-standing client from the UK is looking for a Senior Data Engineer 

     

    Project: Decommissioning legacy software and systems

     

    Tech stack:
    DBT, Snowflake, SQL, Python, Fivetran

     

    Requirements:

    • Solid experience with CI/CD processes in SSIS
    • Proven track record of decommissioning legacy systems and migrating data to modern platforms (e.g., Snowflake)
    • Experience with AWS (preferred) or Azure
    • Communicative and proactive team player — able to collaborate and deliver
    • Independent and flexible when switching between projects
    • English: Upper Intermediate or higher
  • 72 views · 11 applications · 25d

    Data Engineer\Data Analyst

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience

    NuxGame works with iGaming operators of all scales, helping companies access new markets or enhance their existing brands. As a casino gaming software company, NuxGame provides solutions that allow building outstanding brands and fulfilling your business goals. We are looking for a Data Engineer\Data Analyst to join our team.

     

    Responsibilities
     

    • Design, build, and maintain robust data pipelines for large-scale processing.
    • Develop and optimize ETL workflows and data ingestion from various sources (DBs, APIs, event streams).
    • Create and maintain data models and schemas tailored for analytics and reporting.
    • Collaborate with analysts and business teams to understand reporting needs and deliver automated dashboards.
    • Build high-quality reports and dashboards using BI tools.
    • Own and ensure data quality, consistency, and freshness.
    • Implement data security best practices, access controls, and data governance.
    • Improve and monitor data infrastructure performance (e.g., ClickHouse, BigQuery).
    • Work with event-based data (web tracking) to enable product and marketing analytics.
    • Collaborate closely with DevOps and Engineering to deploy and scale data solutions.
    • Investigate new technologies and tools to enhance our data ecosystem.

       

    Experience:
     

    • 3+ years of experience as a Data Analyst, Data Engineer, or in a hybrid role.
    • Solid knowledge of SQL and experience with NoSQL databases.
    • Proven experience building data pipelines and ETL processes from scratch.
    • Hands-on experience with modern Data Warehouses (e.g., ClickHouse, BigQuery, Snowflake).
    • Familiarity with workflow orchestration tools like Airflow, dbt, or similar.
    • Experience working with event-based data (e.g., user behavior tracking).
    • Proficiency in Python for data manipulation and transformation.
    • Experience building custom data connectors or integrating APIs.
    • Strong knowledge of BI tools — especially Tableau, Power BI, or similar.
    • Understanding of cloud platforms (GCP, AWS, or Azure).
    • Familiarity with Git, Docker, and containerized environments.

       

    Nice to have:
     

    • Experience working in the gambling or betting industry — or deep interest in gaming data.
    • Practical knowledge of ClickHouse Cloud, ClickPipes, and related tools.
    • Exposure to data streaming platforms (e.g., Apache Kafka).
    • Understanding of DevOps and automation pipelines.
    • Bachelor's degree in Computer Science, Data Science, Math, or a related field.

       

    What We Offer:
     

    • Work Format: remote.
    • Working Hours: typically 09:00/10:00 to 17:00/18:00 (Kyiv time), Monday to Friday.
    • Compensation: Timely payment of competitive wages (salary).
    • Employment: Official employment.
    • Leave: 24 days of vacation annually.
    • Team Environment: A friendly team and pleasant atmosphere without pressure or stress; open and democratic work organization.
    • Projects: Interesting work on successful projects within the dynamic iGaming sector.