Jobs (139)

  • 211 views · 37 applications · 23d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    About the Company

    An innovative AI FinTech company: a venture-backed, AI-powered platform built specifically for consumer brands. It connects to sources like banks, POS systems, and advertising platforms to deliver daily profit & loss snapshots, cash flow plans, and peer benchmarking, all in real time. With integrations across finance, e-commerce, and marketing tools, it provides the clarity and automation brands need to grow confidently.

     

    What You'll Do

    • Build and maintain data pipelines and backend services in Python, handling ingestion, transformation, validation, and storage (a minimal sketch follows this list).
    • Use dbt along with SQL to define models, write tests, document metrics, and orchestrate transformation logic in the data warehouse.
    • Architect and deploy scalable, reliable systems to support large volumes of financial data.
    • Deliver end-to-end data workflows: own schema design, ETL processing, monitoring, and error handling.
    • Make thoughtful architecture decisions to optimize for performance, maintainability, and security.
    • Work cross-functionally with product, engineering, analytics, and finance teams to translate business needs into data solutions.
    • Develop tooling and dashboards to monitor pipeline health, data quality, and system performance.
    • Mentor junior team members and promote strong engineering practices in testing, documentation, and deployment.
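    To ground the first bullet, here is a minimal sketch of an ingest-validate-store step in plain Python. The endpoint, field names, and SQLite target are invented for illustration; a production pipeline would load into a warehouse instead.

        # Hypothetical ingest -> validate -> store step (illustrative only).
        import sqlite3

        import requests

        def ingest_daily_pnl(api_url: str, conn: sqlite3.Connection) -> int:
            """Fetch raw P&L rows, validate them, and store the clean subset."""
            rows = requests.get(api_url, timeout=30).json()

            # Validation: keep rows with all required fields and non-negative revenue.
            required = ("date", "brand_id", "revenue", "costs")
            clean = [
                (r["date"], r["brand_id"], float(r["revenue"]), float(r["costs"]))
                for r in rows
                if all(k in r for k in required) and float(r["revenue"]) >= 0
            ]

            conn.execute(
                """CREATE TABLE IF NOT EXISTS daily_pnl
                   (date TEXT, brand_id TEXT, revenue REAL, costs REAL)"""
            )
            conn.executemany("INSERT INTO daily_pnl VALUES (?, ?, ?, ?)", clean)
            conn.commit()
            return len(clean)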

     

    What We're Looking For

    • 4+ years of experience in data engineering or backend software engineering roles.
    • Proficiency in Python, with strong knowledge of data structures, orchestrators (Dagster, Airflow), and dbt.
    • Experience building and maintaining ELT pipelines, ingesting raw API data and converting it into clean, unified metrics.
    • Familiarity with ELT frameworks/tools like Airbyte, including connector setup and pipeline monitoring.
    • Experience designing and optimizing data pipelines - ingesting, transforming, and validating data at scale.
    • Proven ability to work independently, owning major components with minimal supervision.
    • Solid database skills: SQL, data modeling, familiarity with both relational (e.g., Postgres) and NoSQL systems.
    • Experience deploying data solutions in cloud environments (e.g., AWS, GCP) and familiarity with containerization (Docker, Kubernetes).
    • Strong problem-solving, debugging, and monitoring practices - logs, metrics, tracing, alerting.
    • Excellent communication skills; ability to collaborate across teams and articulate technical decisions.
    • Work with data from a variety of integrations, including Shopify, Amazon, Facebook, TikTok, QuickBooks, Xero, NetSuite, and more, ensuring consistent and accurate ingestion across multiple platforms.

     

    What We Offer

    • Competitive salary aligned with market standards, with annual salary reviews.
    • Friendly, collaborative startup environment where you can drive impact in a mission-critical space for growing consumer brands.
    • High-growth opportunity - join early and help shape a cutting-edge product used by high-velocity, finance-driven companies.
    • Access to modern tech stack, including serverless/cloud-native solutions, event-driven pipelines, and experimentation with LLMs/AI for data validation.
    • Generous time off, flexible work arrangements, and commitment to work-life balance.
  • 38 views · 6 applications · 12h

    Senior Data Engineer with Azure

    Full Remote · Ukraine, Poland, Romania · 5 years of experience · B2 - Upper Intermediate

    Description

    Our customer is a manufacturing company that has numerous warehouses around the globe. The goal of the project is to provide the customer with extensive analytical tools. It requires us to build custom APIs that will integrate with the customer's SAP system, query data from there, apply specific transformations, and build reports in Power BI based on that processed data to help the customer:

    • Understand current production needs
    • Be able to collect as much data as needed to understand new production requests
    • Compare production needs with materials availability
    • Identify existing bottlenecks from a process perspective
    • Build forecasts

     

    Requirements

    Must-Haves:

    • 5+ years of experience w/ Python-based ETL processing and other data engineering tasks
    • Practical skills in developing APIs from scratch for data extraction from third-party systems
    • Proficiency w/ Azure Cloud and Azure Data Factory (or similar tools for orchestrating and automating data pipelines execution)
    • Solid expertise in SQL and relational DBs
    • Experience in database design and optimization
    • Experience with NoSQL DBs (MongoDB, Cosmos, etc.) for handling unstructured and semi-structured data
    • Experience contributing to release management following CI/CD best practices
    • Nice to have: experience w/ SAP S/4HANA

       

    Job responsibilities

    • Design, optimize, and maintain data integration pipelines
    • Build API-based integrations for data extraction (see the sketch after this list)
    • Implement data transformation conditions according to business requirements
    • Data ingestion
    • Contribute to requirements analysis, design, decomposition
    • Proactively contribute to code quality improvement through design optimization, unit testing, etc.
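    As a rough illustration of the API-based extraction above, here is a hedged sketch of a paginated REST client in Python; the endpoint shape, page parameters, and bearer token are assumptions, and a real SAP integration (e.g., via OData) would differ in detail.

        # Hypothetical paginated extraction from a third-party REST API.
        import requests

        def extract_all(base_url: str, token: str, page_size: int = 100) -> list[dict]:
            """Pull every record from a paginated endpoint, page by page."""
            records: list[dict] = []
            page = 1
            headers = {"Authorization": f"Bearer {token}"}
            while True:
                resp = requests.get(
                    base_url,
                    headers=headers,
                    params={"page": page, "page_size": page_size},
                    timeout=30,
                )
                resp.raise_for_status()
                batch = resp.json().get("items", [])  # assumed response shape
                if not batch:
                    return records
                records.extend(batch)
                page += 1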


     

  • 14 views · 1 application · 9d

    Data practice leader

    EU · 8 years of experience · C1 - Advanced

    Provectus is seeking a Data Practice Lead to drive the growth and leadership of our Data practice. This is a key leadership role responsible for managing a team of 25–30 data professionals and shaping the strategic direction of data initiatives across the company.

    Location: Novi Sad, Serbia; remote, with office attendance voluntary (CANDIDATES FROM OTHER LOCATIONS WON'T BE CONSIDERED)

    About the Role

    As the Data Practice Lead, you will:

    • Lead and develop a high-performing team of 25–30 data engineers and analysts.
    • Define the vision and strategic roadmap for the Data practice, aligning with the company's overall objectives.
    • Oversee the delivery of complex data engineering, analytics, and AI/ML solutions for enterprise clients.
    • Partner with pre-sales and delivery teams to design innovative data solutions and win new business.
    • Act as a key point of contact for stakeholders, ensuring alignment, transparency, and excellence in execution.
    • Foster a culture of technical excellence, collaboration, and continuous improvement.
       

    This role is ideal for a seasoned leader with a strong technical foundation in data and cloud technologies, combined with proven experience in managing large teams and driving strategic initiatives.

    Key Responsibilities

    • Build, mentor, and scale the Data practice team to support current and future business needs.
    • Drive innovation within the Data practice by adopting emerging technologies and best practices.
    • Ensure high-quality project delivery, meeting client expectations and timelines.
    • Collaborate with other practice leads and executives to align cross-practice initiatives.
    • Participate in pre-sales activities, including solution architecture design and client presentations.
    • Track and manage KPIs for team performance, client satisfaction, and financial targets.
       

    Required Skills & Experience

    • 8+ years of experience in data engineering, analytics, or related fields.
    • 3+ years in a leadership role managing teams of 20+ people.
    • Hands-on expertise in data architectures, cloud platforms (preferably AWS), and modern data engineering tools.
    • Strong understanding of data pipelines, analytics solutions, and AI/ML workflows.
    • Excellent leadership, communication, and stakeholder management skills.
    • Ability to work in a fast-paced, dynamic environment and make decisions under pressure.
    • Experience in building and scaling data practices in consulting or service companies.
    • Fluency in Russian is preferred to support team communication.

    Preferred Qualifications

    • Exposure to multi-region team management.
    • Familiarity with pre-sales and business development processes.

    What We Offer

    • Competitive salary and performance bonus.
    • Opportunity to lead a growing practice and shape its strategic direction.
    • Work with cutting-edge technologies and global enterprise clients.
    • Collaborative and innovative work culture.

    About Provectus

    Provectus is a global technology company specializing in AI, Data, and Cloud transformation. We partner with leading enterprises to deliver innovative solutions that drive business success. Our culture fosters collaboration, innovation, and growth, making Provectus an ideal place for talented individuals to thrive.

     

     

  • 28 views · 0 applications · 15d

    Senior Data Engineer with Data Science/MLOps background

    Ukraine · 5 years of experience · B2 - Upper Intermediate

    N-iX is seeking a proactive Senior Data Engineer to join our vibrant team. As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives. The ideal candidate will possess a robust background in cloud technologies and data architecture, and a passion for solving complex data challenges.

     

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement, and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize, and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data, ensuring timely and accurate delivery to business applications while applying data governance and security best practices to safeguard sensitive information (a PySpark sketch follows this list).
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency. 
    • Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
    • Support the implementation of basic ML Ops practices, such as model versioning and monitoring.
    • Assist in optimizing data pipelines to improve machine learning workflows.
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
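    A minimal PySpark sketch of the transform-and-validate work these responsibilities describe; the dataset paths and columns are invented, and inside Palantir Foundry the same logic would live in a Foundry transform rather than a bare SparkSession.

        # Illustrative cleanup job: deduplicate, validate, partition, write.
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("clean_events").getOrCreate()

        raw = spark.read.parquet("/data/raw/events")  # hypothetical input path

        clean = (
            raw.dropDuplicates(["event_id"])            # enforce key uniqueness
               .filter(F.col("event_ts").isNotNull())   # basic validation
               .withColumn("event_date", F.to_date("event_ts"))
        )

        clean.write.mode("overwrite").partitionBy("event_date").parquet("/data/clean/events")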

       

    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript

       

    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python;
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL);
    • Hands-on experience with containerization technologies (e.g., Docker, Kubernetes);
    • Familiarity with MLOps concepts, including model deployment and monitoring;
    • Basic understanding of machine learning frameworks such as TensorFlow or PyTorch;
    • Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
    • Experience with feature engineering and data preparation for machine learning models;
    • Effective problem-solving and analytical skills;
    • Strong communication, collaboration, and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.

       

    Nice to have:

    • Certification in Cloud platforms, or related areas;
    • Experience with the Apache Lucene search engine and RESTful web services;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.

       

    We offer:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits
  • 21 views · 0 applications · 3d

    Back End and Data Engineer

    Hybrid Remote · Spain, Ukraine · Product · 3 years of experience · C1 - Advanced

    Back End Data Engineer

     

    Company: NeuCurrent

     

    Location: Remote / Hybrid

     

    NeuCurrent is an AI-powered CRM and omnichannel marketing platform for retailers. NeuCurrent empowers brands to connect with their customers more intelligently, through real-time personalization across email, SMS, WhatsApp, and push.

     

    We are looking for a motivated, bright, and independent Data and Back End Engineer with at least 3 years' experience who is willing to take on broad responsibilities across product development and wants to progress quickly to a senior role within the company.

     

    Why you should apply:

    This is a challenging job. Building something new is not easy.

     

    We are building a new product.

     

    It takes time to grow and get results.

     

    Working for a start up you are going through lots of challenges.

     

    We often multi-task as we grow our company.

     

    BUT if you want to be part of a fast-growing business and team, generate your own ideas, and see the direct impact of your work on our users' businesses, you should consider applying for this job.

     

    Apply if you want to try your hand at something new!

     

    About company:

    At NeuCurrent, we are building disruptive technology for retail that enables all retailers to implement data-driven customer retention without expensive data analytics and marketing resources. We are actively expanding in Europe.

     

    Our values:

    We care for our people and customers

    We develop innovation that generates real benefit

    We are not afraid to take risks and see failure as a learning opportunity

    We are determined and always look for solutions to move forward

     

    Responsibilities and Tech Stacks:

    Data Processing

     

    • Maintain and improve data collection pipelines, which are the backbone of our product (a small sketch follows the tech stack below)
    • Create new integrations
    • Maintain the DWH architecture
    • Support and improve our Kubernetes infrastructure on GCP
    • Develop infrastructure for ML pipelines

     

    Tech stack:

    Redis, Argo

    Python 3, Pandas, Numpy, Sqlalchemy

    PostgreSQL, Google BigQuery

    Docker, Kubernetes, Google Cloud Platform

    Flask
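    A small sketch of one collection step in this stack, assuming a hypothetical raw_orders table: pull rows from PostgreSQL with SQLAlchemy, clean them with pandas, and write the result back.

        # Illustrative pandas + SQLAlchemy pipeline step; DSN and columns are placeholders.
        import pandas as pd
        from sqlalchemy import create_engine

        engine = create_engine("postgresql+psycopg2://user:pass@host/db")

        raw = pd.read_sql("SELECT * FROM raw_orders", engine)

        clean = (
            raw.dropna(subset=["customer_id"])  # drop rows without a customer
               .assign(order_total=lambda d: d.quantity * d.unit_price)
        )

        clean.to_sql("orders_clean", engine, if_exists="replace", index=False)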

     

    Backend Development

     

    • Support and further develop the product backend API
    • Develop infrastructure for ML pipelines

     

    Tech stack:

    Python, Flask

    PostgreSQL

    Docker, Kubernetes, Google Cloud Platform

     

    ML and Data Analysis 

     

    • Improve and support the ML models of the product recommendation engine
    • Develop dashboards

     

    Tech stack:

    Collaborative filtering models

    Content and hybrid recommendation models

    Learning-to-rank models

    Sequence prediction neural network models

    Python, Jupyter, pandas

    BigQuery

    dbt (Data Build Tool)

    Looker Studio

     

    Other Requirements:

    Work closely with the founders on product development and improvement

    Actively contribute ideas to product innovation including AI implementation

    Customer focused 

    Excellent working English is a must, as you will deal with English-speaking customers.

     

    Personal Qualities:

    A self-starter who has the experience and confidence to act independently

    Have a positive mindset and attitude to solve problems and represent NeuCurrent with our customers

    A strong team player who is keen to learn and share her/his knowledge

    Enthusiasm and desire to progress professionally to a managerial role quickly

    Be structured and a good communicator 

     

    What we offer:

    You will have a lot of freedom in your field to create direct impact on our clients and NeuCurrent business and product development

    We have a young, highly motivated, and talented team

    You will become an integral part of new product development

    You will have plenty of opportunities to make your own decisions

    You will have unlimited opportunities to progress to managerial roles in our fast growing company

    Options / shares are available for a proven candidate

     

  • 28 views · 0 applications · 14d

    Data Engineer

    Hybrid Remote · Poland · Product · 4 years of experience · B2 - Upper Intermediate

    We're Bringg! A delivery management leader serving 800+ customers globally. Leading enterprise retailers and brands use Bringg to grow their delivery capacity, reduce costs, and improve customer experiences. Every year, we process over 200 million orders through our smart, automated omnichannel platform.

    We are seeking a forward-thinking Data Engineer to join our team during a pivotal time of transformation. As we redesign our data pipeline solutions, you will play a key role in shaping and executing our next-generation data infrastructure. This is a hands-on, architecture-driven role ideal for someone ready to help lead this evolution.

     

    In this role, you will:

    • Drive the design and architecture of scalable, efficient, and resilient batch and streaming data pipelines (a small consumer sketch follows this list).
    • Shape the implementation of modern, distributed systems to support high-throughput data processing and real-time analytics.
    • Collaborate cross-functionally with data scientists, engineers, and product stakeholders to deliver end-to-end data-driven capabilities.
    • Optimize legacy systems during the migration phase, ensuring a seamless transition with minimal disruption.
    • Contribute to DevOps and MLOps processes and enhance the reliability, monitoring, and automation of data infrastructure.
    • Support the integration and deployment of AI/ML models within the evolving data platform.
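    A hedged sketch of a small streaming consumer using kafka-python; the topic, broker address, and event fields are invented, and production pipelines here would more likely express this as a Flink or Spark streaming job.

        # Illustrative consumer for a hypothetical order-events topic.
        import json

        from kafka import KafkaConsumer

        consumer = KafkaConsumer(
            "order-events",
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda b: json.loads(b.decode("utf-8")),
            auto_offset_reset="earliest",
        )

        for message in consumer:
            event = message.value
            # Real code would validate, enrich, and forward to a sink here.
            print(event.get("order_id"), event.get("status"))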

     

    What We're Looking For

    • 4+ years of experience building and maintaining data pipelines using tools like Flink, Spark, Kafka, and Airflow.
    • Deep understanding of SQL and NoSQL ecosystems (e.g., Postgres, Redis, Elastic, Delta Lake).
    • Solid backend development experience, with a strong command of OOP/OOD principles and design patterns.
    • Demonstrated experience designing and implementing new data architectures, especially in fast-paced or transitioning environments.
    • Exposure to MLOps and the full lifecycle of AI/ML model deployment in production.
    • Passion for learning and applying new technologies, whether a new stack or a paradigm shift.
    • Experience in DevOps and asynchronous systems; familiarity with RabbitMQ, Docker, WebSockets, and Linux environments is a plus.
    • Comfortable taking initiative and working independently with minimal structure.
    • Advantageous: experience with routing and navigation algorithms.
  • 51 views · 2 applications · 12d

    Senior Data Engineer to $5000

    Hybrid Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    The Client is dedicated to curating premium live event experiences with the goal of creating memories that will last a lifetime for their guests. With over two decades of experience, the company has redefined luxury hospitality with unsurpassed access, trusted VIP service, and expertise in event planning, travel, hospitality, and corporate ticket sales, providing seamless convenience, comfort, and elevated entertainment for both personal and corporate clients.

     

    Requirements:

    • Strong SQL skills with a focus on cloud database experience (e.g., Snowflake, Redshift, BigQuery).
    • Data modeling skills, with a strong preference for dbt.
    • Background in building, maintaining, and supporting data pipelines.
    • Experience with BI tooling (e.g., Tableau, Power BI, Looker).
    • Experience with Python and data orchestration tooling (e.g., Airflow, Prefect).

       

    Responsibilities:

    • This person would sit embedded with our data product team at TKO and support our travel business unit (hotels/flights/cost) with data engineering, data modeling, and BI report support.
    • Expected work would be roughly 20% requirements gathering, 50% data modeling using dbt/SQL, and 30% engineering data pipelines, collaborating with BI Engineers and Data Analysts to support their data models (a minimal orchestration sketch follows).
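    A minimal orchestration sketch for the dbt-centric workflow above: an Airflow DAG (Airflow 2.4+ schedule syntax) that runs, then tests, the models daily. The DAG name, project path, selector, and schedule are assumptions.

        # Hypothetical DAG: run dbt models, then dbt tests, once a day.
        from datetime import datetime

        from airflow import DAG
        from airflow.operators.bash import BashOperator

        with DAG(
            dag_id="dbt_travel_models",          # invented name
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
        ) as dag:
            dbt_run = BashOperator(
                task_id="dbt_run",
                bash_command="cd /opt/dbt/travel && dbt run --select travel",
            )
            dbt_test = BashOperator(
                task_id="dbt_test",
                bash_command="cd /opt/dbt/travel && dbt test --select travel",
            )
            dbt_run >> dbt_test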

       

    Details about the customer:

    Working hours: until 8 or 9 p.m.

     

    We offer:

    • Annual paid vacation of 18 working days.
    • Extra vacation days for long-lasting cooperation.
    • Annual paid sick leave of 10 days.
    • Maternity/Paternity leave.
    • The opportunity for sabbatical leave.
    • Marriage and Parenthood Package.
    • Compensation for sports activities (up to $250 per year) or health insurance coverage (70%), after the trial period.
    • Internal education (corporate library, Udemy courses).
    • Career development plan.
    • English and Spanish classes.
    • Paying taxes and managing PE (Private Entrepreneur).
    • Technical equipment.
    • Internal Referral program.
    • Opportunity to take part in company volunteering activities.
    • Sombra is a “Friendly to Veterans” award holder.
  • 27 views · 2 applications · 16d

    Lead Data Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience

    Job overview:
    We are seeking a highly skilled Lead Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

     

    Does this relate to you?

    • 7+ years of experience in the data engineering field
    • At least 1 year of experience as a Lead or Architect
    • Strong expertise in SQL and data modeling concepts.
    • Hands-on experience with Airflow.
    • Experience working with Redshift.
    • Proficiency in Python for data processing.
    • Strong understanding of data governance, security, and compliance.
    • Experience in implementing CI/CD pipelines for data workflows.
    • Ability to work independently and collaboratively in an agile environment.
    • Excellent problem-solving and analytical skills.

       

    A new team member will:

    • Design, develop, and maintain scalable data warehouse solutions.
    • Build and optimize ETL/ELT pipelines for efficient data integration.
    • Design and implement data models to support analytical and reporting needs.
    • Ensure data integrity, quality, and security across all pipelines.
    • Optimize data performance and scalability using best practices.
    • Work with big data technologies such as Redshift.
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
    • Implement CI/CD pipelines for data workflows.
    • Monitor, troubleshoot, and improve data processes and system performance.
    • Stay updated with industry trends and emerging technologies in data engineering.

     

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance reviews, including remuneration reviews
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program implying attractive bonuses
    • External & internal training and IT certifications

     

    Ready to try your hand? Don't hesitate to send your CV!

  • 22 views · 3 applications · 9d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 6 years of experience · B2 - Upper Intermediate

    Job Description

     

    Product:

     

    We're a forward-thinking startup dedicated to tackling toxic narratives and disinformation across social media. Powered by cutting-edge AI and real-time analytics, we help Fortune 500 companies stay ahead by identifying and addressing harmful content as it emerges.

     

    About the role:

     

    We're seeking a Senior Software Engineer with strong data expertise to design, implement, and improve the backend systems that power our narrative intelligence platform. Your role will involve working collaboratively to impact our product and service capabilities and contributing to our growth and success in the narrative intelligence space.

    As an early-stage startup, we move at a fast pace: we make rapid decisions, shift priorities quickly, and you'll often be figuring things out as we scale.

    We value team players who enjoy sharing knowledge and ideas. We're looking to continue growing as a strong team, whether through mentoring, tracking shared and individual progress, or ensuring that everyone plays an active role in making key decisions.

     

    Team: 25 engineers, 75 people in total on the product.

     

     

    🛠 Desired Skills and Experience

     

    • 6+ years of experience building data analytics infrastructure.
    • 4+ years of experience with FastAPI (a minimal endpoint sketch follows this list).
    • Strong knowledge of data algorithms.
    • Strong comprehension of distributed systems and cloud-oriented development (AWS experience preferred).
    • Ability to deliver clean, well-tested code.
    • Very strong understanding of large-scale database systems.
    • Excellent problem-solving, analytical, and teamwork skills.
    • Fluent spoken and written English (B2 at least).
    • Passion for the AI industry and experience with its tech.
    • A proven background working on products, start-ups, or companies from Israel is a MUST.
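    A minimal FastAPI sketch of the kind of analytics endpoint this role might own; the route and response model are invented for illustration.

        # Hypothetical read endpoint returning per-narrative statistics.
        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI()

        class NarrativeStats(BaseModel):
            narrative_id: str
            mention_count: int
            sentiment: float

        @app.get("/narratives/{narrative_id}/stats", response_model=NarrativeStats)
        async def narrative_stats(narrative_id: str) -> NarrativeStats:
            # Real code would query the analytics store; this returns a stub.
            return NarrativeStats(narrative_id=narrative_id, mention_count=0, sentiment=0.0)
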
  • 11 views · 2 applications · 11h

    Senior ML Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    Senior ML/GenAI Engineer - Remote

     

    Role Responsibilities:

    • Develop AI Agents, tools for AI Agents, and APIs as a service
    • Prepare development and deployment documentation
    • Participate in R&D activities of the Data Science team

    Required Skills & Experience:

    • 5+ years of experience with DL frameworks (PyTorch and/or TensorFlow)
    • 5+ years of experience in software development in Python
    • Hands-on experience with LLM, RAG, and AI Agent development
    • Experience with Amazon SageMaker, Amazon Bedrock, LangChain, LangGraph, LangSmith, LlamaIndex, Hugging Face, OpenAI
    • Hands-on experience using AI tools for software development to increase efficiency and code quality, including AI tools for code review
    • Knowledge of SQL, NoSQL, and vector databases
    • Understanding of embedding vectors and semantic search (a small sketch follows this list)
    • Proficiency in Git (Bitbucket) and Docker
    • Upper-Intermediate (B2+) or higher level of English
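    A small sketch of the embedding-vector semantic search named above: cosine similarity over document vectors. embed() is a placeholder for any real embedding model (OpenAI, Hugging Face sentence-transformers, etc.).

        # Illustrative semantic search; embed() is a stand-in, not a real model.
        import numpy as np

        def embed(text: str) -> np.ndarray:
            """Placeholder: substitute your embedding model of choice."""
            rng = np.random.default_rng(abs(hash(text)) % (2**32))
            return rng.standard_normal(384)

        def semantic_search(query: str, docs: list[str], top_k: int = 3) -> list[str]:
            doc_vecs = np.stack([embed(d) for d in docs])
            # Cosine similarity = dot product of L2-normalized vectors.
            doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
            q = embed(query)
            q /= np.linalg.norm(q)
            scores = doc_vecs @ q
            return [docs[i] for i in np.argsort(scores)[::-1][:top_k]]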

    Would Be a Plus:

    • Hands-on experience with SLM and LLM fine-tuning
    • Education in Data Science, Computer Science, Applied Math, or similar
    • AWS certifications (AWS Certified ML or equivalent)
    • Experience with Typesense
    • Experience with speech recognition and speech-to-text ML models

    What We Offer:

    • Career growth with an international team.
    • Competitive salary and financial stability.
    • Flexible working hours (Mon-Fri, 8 hours).
    • Free English courses and a budget for education.


     

  • 39 views · 1 application · 10d

    Senior BI Engineer

    Full Remote · Ukraine · Product · 5 years of experience · B1 - Intermediate

    FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
    We develop innovations in the field of gambling and betting through a complex multi-component platform capable of withstanding enormous loads and providing a unique experience for players.
    FAVBET Tech does not organize or conduct gambling on its platform. Its main focus is software development.

     

    We are looking for a Senior BI Engineer to join our BI SB Team.

    Requirements:

    — At least 5 years of experience in designing and creating modern data integration solutions.

    — Leading the BI SB Team.

    — People management and task definition skills are a must.
    — Master's degree in Computer Science or a related field.
    — Proficient in Python and SQL, particularly for data engineering tasks.
    — Experience with data processing, ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes, and data pipeline development.
    — Experience with the dbt framework and Airflow orchestration.
    — Practical experience with both SQL and NoSQL databases (such as PostgreSQL, MongoDB).
    — Experience with Snowflake.
    — Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
    — Experience in managing data warehouses and data lakes. Familiarity with star and snowflake DWH design schemas. Understanding of the difference between OLAP and OLTP.

    — Experience in designing data analytics reports with QuickSight, Pentaho, or Power BI.

     

    Would be a plus:

    — Experience with cloud data services (e.g., AWS Redshift, Google BigQuery).

    — Experience with tools like GitHub, GitLab, Bitbucket.

    — Experience with real-time data processing (e.g., Kafka, Flink).
    — Familiarity with orchestration tools like Airflow, Luigi.
    — Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
    — Knowledge of data security and privacy practices.

     

    Responsibilities:

    — Design, construct, install, test, and maintain highly scalable data management systems.
    — Develop ETL/ELT processes and frameworks for data transformation and load (a load sketch follows this list).

    — Implement, optimize, and support reports for the Sportsbook domain.
    — Ensure efficient storage and retrieval of big data.
    — Optimize data retrieval and query performance.
    — Work closely with data scientists and analysts to provide data solutions and insights.
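    A hedged sketch of one ETL step implied by this list, using the snowflake-connector-python client; the account, credentials, stage, and table names are all placeholders.

        # Illustrative load-and-aggregate step in Snowflake.
        import snowflake.connector

        conn = snowflake.connector.connect(
            user="ETL_USER", password="...", account="my_account",
            warehouse="ETL_WH", database="SPORTSBOOK", schema="STAGING",
        )

        with conn.cursor() as cur:
            # Load staged files, then refresh a simple daily report table.
            cur.execute("COPY INTO staging.bets FROM @bets_stage FILE_FORMAT = (TYPE = CSV)")
            cur.execute(
                """CREATE OR REPLACE TABLE reports.daily_bets AS
                   SELECT bet_date, COUNT(*) AS bets, SUM(stake) AS turnover
                   FROM staging.bets
                   GROUP BY bet_date"""
            )
        conn.close()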

     

    We can offer:

    — 30 days of paid vacation and sick days: we value rest and recreation. We also observe national holidays.

    — Medical insurance for employees, company-funded training, and gym membership.

    — Remote work; after Ukraine wins the war, our own modern loft office with spacious workplaces and brand-new equipment (near Pochaina metro station).

    — Flexible work schedule: we expect a full-time commitment but do not track your working hours.

    — Flat hierarchy without micromanagement: our doors are open, and all teammates are approachable.

     

     

     

  • 20 views · 4 applications · 1d

    Middle BI Engineer

    Full Remote · Ukraine · Product · 2 years of experience

    FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
    We develop innovations in the field of gambling and betting through a complex multi-component platform capable of withstanding enormous loads and providing a unique experience for players.
    FAVBET Tech does not organize or conduct gambling on its platform. Its main focus is software development.

     

    We are looking for a Middle BI Engineer to join our BI SB Team.
     

    Requirements:

    — At least 2 years of experience in designing and creating modern data integration solutions.
    — Master's degree in Computer Science or a related field.
    — Proficient in Python and SQL, particularly for data engineering tasks.
    — Experience with data processing, ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes, and data pipeline development.
    — Experience with the dbt framework and Airflow orchestration.
    — Practical experience with both SQL and NoSQL databases (such as PostgreSQL, MongoDB).
    — Experience with Snowflake.
    — Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
    — Experience in managing data warehouses and data lakes. Familiarity with star and snowflake DWH design schemas. Understanding of the difference between OLAP and OLTP.

    — Experience in designing data analytics reports with QuickSight, Pentaho, or Power BI.
     

    Would be a plus:

    — Experience with cloud data services (e.g., AWS Redshift, Google BigQuery).

    — Experience with tools like GitHub, GitLab, Bitbucket.

    — Experience with real-time data processing (e.g., Kafka, Flink).
    — Familiarity with orchestration tools like Airflow, Luigi.
    — Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
    — Knowledge of data security and privacy practices.

    Responsibilities:

    — Design, construct, install, test, and maintain highly scalable data management systems.
    — Develop ETL/ELT processes and frameworks for data transformation and load.

    — Implement, optimize, and support reports for the Sportsbook domain.
    — Ensure efficient storage and retrieval of big data.
    — Optimize data retrieval and query performance.
    — Work closely with data scientists and analysts to provide data solutions and insights.
     

    We can offer:

    — 30 days of paid vacation and sick days: we value rest and recreation. We also observe national holidays.

    — Medical insurance for employees, company-funded training, and gym membership.

    — Remote work; after Ukraine wins the war, our own modern loft office with spacious workplaces and brand-new equipment (near Pochaina metro station).

    — Flexible work schedule: we expect a full-time commitment but do not track your working hours.

    — Flat hierarchy without micromanagement: our doors are open, and all teammates are approachable.

     

     

     

  • 128 views · 29 applications · 20d

    Middle Data Engineer (Healthcare domain)

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    Sigma Software is looking for a motivated Data Engineer to join our expanding engineering team.

     

    If you want to work in a close-knit team of Data Engineers solving complex problems using advanced data collection, transformation, analysis, and monitoring, then this opportunity is for you.

     

    We look forward to having you on our team!

     

    Customer

    Our client is a leading medical technology company. Its portfolio of products, services, and solutions is at the center of clinical decision-making and treatment pathways. Patient-centered innovation has always been, and will always be, at the core of the company. The client is committed to improving patient outcomes and experiences, regardless of where patients live or what they face. The client is innovating sustainably to provide healthcare for everyone, everywhere.

     

    Project

    The project's mission is to enable healthcare providers to increase their value by equipping them with innovative technologies and services in diagnostic and therapeutic imaging, laboratory diagnostics, molecular medicine, and digital health and enterprise services.

     

    Requirements

    • Experience in data engineering and with cloud computing services and solutions in the area of data and analytics, preferably Azure
    • Conceptual knowledge of data analysis fundamentals, e.g., dimensional modeling, ETL, reporting tools, data governance, data warehousing, structured and unstructured data
    • Knowledge of SQL and experience with the Python programming language
    • Excellent communication skills and fluency in business English
    • Understanding of big data databases, such as Snowflake and BigQuery. Snowflake is preferred
    • Experience with database development and data modeling, ideally with Databricks or Spark

     

    Responsibilities

    • Implement architecture based on Azure cloud platforms (Data Factory, Databricks, Event Hub)
    • Design, develop, optimize, and maintain squad-specific data architecture and pipelines that adhere to defined ETL and Data Lake principles
    • Discover, understand, and organize disparate data sources and structure them into clean data models with clear, understandable schemas
    • Contribute to evaluating new tools for analytical data engineering or data science
    • Suggest and contribute to training and improvement plans for analytical data engineering skills, standards, and processes
  • 134 views · 24 applications · 9d

    Senior Data Engineer (Healthcare domain)

    Full Remote · EU · 5 years of experience · B2 - Upper Intermediate

    Unleash your ambition and lead a team as a Senior Data Engineer! We are looking for a professional who not only has impressive data engineering experience but is also ready to take on a leadership role. 

    Join us, and you will have the opportunity to innovate, lead the team, improve our solutions, and help us reach new heights! 

    If you are interested in this position, submit your CV now. 

    CUSTOMER
    Our client is a leading medical technology company. Its portfolio of products, services, and solutions is focused on clinical decision-making and treatment pathways. Patient-centered innovation has always been, and will always be, at the core of the company. The client is committed to improving patient outcomes and experiences, regardless of where people live or what problems they face. The client innovates sustainably to provide healthcare for everyone, everywhere.

    PROJECT
    The project's mission is to enable healthcare providers to increase their value by providing them with innovative technology and services in diagnostic and therapeutic imaging, laboratory diagnostics, molecular medicine, digital health, and enterprise services.

     

    Job Description

     

    • Work closely with the client (PO) and other team leads to clarify technical requirements and expectations 
    • Coordinate and supervise your team (up to 4 team members), track performance, and provide support where team lead involvement is required
    • Implement architectures based on Azure cloud platforms (Data Factory, Databricks, and Event Hub)
    • Design, develop, optimize, and maintain squad-specific data architectures and pipelines that adhere to defined ETL and Data Lake principles
    • Discover, analyze, and organize disparate data sources and structure them into clean data models with clear, understandable schemas
    • Contribute to evaluating new tools for analytical data engineering or data science
    • Suggest and contribute to training and improvement plans related to analytical data engineering skills, standards, and processes 

     

    Qualifications

     

    • Experience with data engineering and cloud computing services and solutions in the field of data and analytics. Azure is preferable 
    • Conceptual knowledge of data analytics fundamentals, such as dimensional modeling, ETL, reporting tools, data governance, data warehousing, and structured and unstructured data
    • Knowledge of SQL and experience with at least one programming language (Python or Scala)
    • Understanding of big data databases such as Snowflake, BigQuery, etc. Snowflake is preferable
    • Experience with database development and data modeling, ideally with Databricks or Spark
    • Fluency in business English 

     

    PERSONAL PROFILE

    • Excellent communication skills 

     

  • 69 views · 10 applications · 4d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    We are seeking a talented and experienced Data Engineer to join our professional services team of 50+ engineers on a full-time basis. This remote-first position requires in-depth expertise in data engineering, with a preference for experience in cloud platforms like AWS and Google Cloud. You will play a vital role in ensuring the performance, efficiency, and integrity of our customers' data pipelines while contributing to insightful data analysis and utilization.


    About us: 

    Opsfleet is a boutique services company that specializes in cloud infrastructure, data, AI, and human-behavior analytics, helping organizations make smarter decisions and boost performance.

    Our experts provide end-to-end solutions, from data engineering and advanced analytics to DevOps, ensuring scalable, secure, and AI-ready platforms that turn insights into action.

     

    Role Overview

    As a Data Engineer at Opsfleet, you will lead the entire data lifecycle: gathering and translating business requirements, ingesting and integrating diverse data sources, and designing, building, and orchestrating robust ETL/ELT pipelines with built-in quality checks, governance, and observability. You'll partner with data scientists to prepare, deploy, and monitor ML/AI models in production, and work closely with analysts and stakeholders to transform raw data into actionable insights and scalable intelligence.

     

    What You'll Do

    * E2E Solution Delivery: Lead the full spectrum of data projects: requirements gathering, data ingestion, modeling, validation, and production deployment.

    * Data Modeling: Develop and maintain robust logical and physical data models, such as star and snowflake schemas, to support analytics, reporting, and scalable data architectures (see the sketch after this list).

    * Data Analysis & BI: Transform complex datasets into clear, actionable insights; develop dashboards and reports that drive operational efficiency and revenue growth.

    * ML Engineering: Implement and manage model-serving pipelines using the cloud's MLOps toolchain, ensuring reliability and monitoring in production.

    * Collaboration & Research: Partner with cross-functional teams to prototype solutions, identify new opportunities, and drive continuous improvement.
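    A toy illustration of the star-schema modeling mentioned in the list above: splitting raw orders into a fact table and a customer dimension with pandas. All column names are invented.

        # Illustrative star-schema split: one dimension, one fact table.
        import pandas as pd

        raw = pd.DataFrame({
            "order_id": [1, 2, 3],
            "customer": ["a", "b", "a"],
            "country": ["DE", "FR", "DE"],
            "amount": [10.0, 20.0, 15.0],
        })

        # Dimension: one row per customer, with a surrogate key.
        dim_customer = raw[["customer", "country"]].drop_duplicates().reset_index(drop=True)
        dim_customer["customer_key"] = dim_customer.index

        # Fact: measures plus a foreign key into the dimension.
        fact_orders = raw.merge(dim_customer, on=["customer", "country"])[
            ["order_id", "customer_key", "amount"]
        ]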

     

    What We're Looking For

    Experience: 4+ years in a data‑focused role (Data Engineer, BI Developer, or similar)

    Technical Skills: Proficient in SQL and Python for data manipulation, cleaning, transformation, and ETL workflows. Strong understanding of statistical methods and data modeling concepts.

    Soft Skills: Excellent problem-solving ability, critical thinking, and attention to detail. Outstanding written and verbal communication.

    Education: BSc or higher in Mathematics, Statistics, Engineering, Computer Science, Life Science, or a related quantitative discipline.

     

    Nice to Have

    Cloud & Data Warehousing: Hands‑on experience with cloud platforms (GCP, AWS or others) and modern data warehouses such as BigQuery and Snowflake.


     
