Jobs

104
  • 18 views · 0 applications · 11d

    Data scientist with Java expertise

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
    Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.
    Current overriding priorities are onboarding new brands, re-architecture, database migrations, and migrating microservices to a unified cloud-native solution without any disruption to the business.

    • Responsibilities:

      We are looking for an experienced Data Engineer with Machine Learning expertise and a good understanding of search engines to work on the following:
      - Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.
      - Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.
      - Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.
      - Integrate ML models and search capabilities into production systems.
      - Evaluate, fine-tune, and monitor search performance metrics.
      - Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.
      - Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.
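      The vector-search work described above ultimately ranks documents by embedding similarity. A minimal, library-free sketch of the idea (toy 3-dimensional vectors stand in for real model embeddings; the catalog entries are invented):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec, index, k=3):
    """Rank all documents by cosine similarity and return the top k."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Toy "embeddings"; a production system would use vectors produced by a
# trained encoder and delegate the ranking to Lucene/Solr dense-vector search.
index = {
    "red running shoes":  [0.9, 0.1, 0.0],
    "blue running shoes": [0.8, 0.3, 0.0],
    "garden hose":        [0.0, 0.2, 0.9],
}
results = search([1.0, 0.0, 0.0], index, k=2)
```

      In production the same ranking is typically handled by Lucene/Solr's KNN support over an indexed vector field rather than computed in application code.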

    • Mandatory Skills Description:

      - 5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.
      - Strong programming experience in both Java and Python (production-level code, not just prototyping).
      - Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).
      - Experience with Vector Databases, Embeddings, and Semantic Search techniques.
      - Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).
      - Experience deploying and maintaining ML/search systems in production.
      - Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).

    • Nice-to-Have Skills Description:

      - Experience working in distributed teams with US customers.
      - Experience with LLMs, RAG pipelines, and vector retrieval frameworks.
      - Knowledge of Spring Boot, FastAPI, or similar backend frameworks.
      - Familiarity with Kubernetes, Docker, and cloud platforms (AWS/Azure/GCP).
      - Experience with MLOps and model monitoring tools.
      - Contributions to open-source search or ML projects.

  • 51 views · 20 applications · 10d

    Data Scientist

    Full Remote · Worldwide · 3 years of experience · B2 - Upper Intermediate

    Project
    Global freight management solutions and services, specializing in Freight Audit & Payment, Order Management, Supplier Management, Visibility, TMS and Freight Spend Analytics.

    Overview
    We are looking for a Data Scientist with a strong background in statistics and probability theory to help us build intelligent analytical solutions. The current focus is on outlier detection in freight management data, with further development toward anomaly detection and forecasting models for logistics and freight spend. The role requires both deep analytical thinking and practical hands-on work with data, from SQL extraction to model deployment.

    Key Responsibilities

    • Apply statistical methods and machine learning techniques for outlier and anomaly detection.
    • Design and develop forecasting models to predict freight costs, shipment volumes, and logistics trends.
    • Extract, preprocess, and transform large datasets directly from SQL databases.
    • Categorize exceptions into business-defined groups (e.g., High Value Exceptions, Accessorial Charge Exceptions, Unexpected Origin/Destination).
    • Collaborate with business analysts to align analytical approaches with domain requirements.
    • Use dashboards (e.g., nSight) for validation, visualization, and reporting of results.
    • Ensure models are interpretable, scalable, and deliver actionable insights.
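    For the outlier-detection focus above, a common first baseline is Tukey's IQR fences. A minimal sketch on invented freight charges (the data and multiplier are illustrative only, not from the project):

```python
def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    xs = sorted(values)

    def quantile(q):
        # Linear interpolation between order statistics.
        pos = q * (len(xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# Freight invoice amounts with one suspicious charge.
charges = [120.0, 135.0, 128.0, 140.0, 125.0, 131.0, 980.0]
flagged = iqr_outliers(charges)  # the 980.0 charge is flagged
```

    Flagged charges would then be routed into the business-defined exception categories for analyst review.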

    Requirements

    • Strong foundation in statistics and probability theory.
    • Proficiency in Python with libraries such as pandas, numpy, matplotlib, scikit-learn.
    • Proven experience with outlier/anomaly detection techniques.
    • Hands-on experience in forecasting models (time-series, regression, or advanced ML methods).
    • Strong SQL skills for working with large datasets.
    • Ability to communicate findings effectively to both technical and non-technical stakeholders.

    Nice to Have

    • Experience with ML frameworks (TensorFlow, PyTorch).
    • Familiarity with MLOps practices and model deployment.
    • Exposure to logistics, supply chain, or financial data.
    • Knowledge of cloud platforms (AWS, GCP, Azure).

  • 41 views · 15 applications · 9d

    Machine Learning Engineer

    Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate

    Overview:
    We are looking for a Senior ML Engineer with extensive experience in neural networks (NNs), LSTM architectures, and prescriptive modeling. A background in hydrology or environmental modeling is a strong plus. The engineer will develop and optimize simulation and decision-support models to forecast and mitigate floods, droughts, and compound extreme events.

     

    Key Responsibilities:
    • Predictive Modeling: Design and train neural network models (LSTM, RNN, CNN) for hydrological forecasting and time-series analysis of basin or climate data
    • Prescriptive Modeling & Simulation: Develop simulation and optimization models using mass balance equations and hydrological process representations
    • Scenario Analysis: Build models to predict floods, droughts, and extreme events; benchmark simulations against historical and observed datasets
    • Optimization & Decision Support: Apply simulation/optimization libraries such as Pyomo, Gurobi, SimPy, and metaheuristic approaches for decision-making under uncertainty
    • Geospatial Integration: Integrate models with geospatial and climate data sources to enable real-time scenario simulations
    • Explainability & Uncertainty: Embed explainability and uncertainty quantification layers into model outputs to enhance stakeholder trust and interpretability
    • Transparency & Documentation: Ensure comprehensive model documentation, benchmarking procedures, and reproducibility for scientific and stakeholder validation
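    The mass-balance modeling named above can be illustrated with a toy reservoir simulation; all numbers are hypothetical, and a real model would add evaporation, routing, and calibrated parameters:

```python
def simulate_storage(s0, inflows, demand, capacity):
    """Daily mass balance: S[t+1] = S[t] + inflow - release, with release
    capped by availability and storage clipped to [0, capacity].
    Spill above capacity serves as a crude flood indicator."""
    storage, spills = [s0], []
    s = s0
    for q_in in inflows:
        release = min(demand, s + q_in)   # cannot release more than available
        s = s + q_in - release
        spill = max(0.0, s - capacity)    # overflow above capacity
        s = min(s, capacity)
        storage.append(s)
        spills.append(spill)
    return storage, spills

storage, spills = simulate_storage(
    s0=50.0, inflows=[10, 90, 5, 0], demand=20.0, capacity=100.0)
# The day-2 inflow pulse overtops the reservoir, producing a spill.
```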

    Required Qualifications:
    • Strong background in Neural Networks (NNs), LSTM, and time-series modeling
    • Proven experience with simulation and optimization frameworks (Pyomo, Gurobi, SimPy, or metaheuristics)
    • Knowledge of mass balance modeling, hydrological processes, and climate data systems
    • Proficiency in Python and ML frameworks (TensorFlow, PyTorch, Scikit-learn)
    • Familiarity with geospatial data integration (GIS, raster/vector datasets)
    • Experience implementing model explainability and uncertainty assessment
    • Excellent analytical and documentation skills for model transparency and validation

     

    Nice to have:
    • Background in hydrology, water resources, or climate modeling
    • Experience with prescriptive analytics and decision-support systems for environmental domains

  • 51 views · 22 applications · 9d

    Machine Learning Engineer

    Full Remote · Worldwide · Product · 4 years of experience · B1 - Intermediate

    Project: Advanced Simulation & Decision-Support for Hydrology
    Location: Remote
    Start: ASAP

     

    About the role

    We’re hiring a Senior ML Engineer to build forecasting and decision-support models for floods, droughts, and compound extreme events. You’ll design neural models (LSTM/RNN/CNN), develop prescriptive simulations (mass balance & hydrological processes), and deliver transparent, benchmarked results stakeholders can trust.

     

    Responsibilities

    • Predictive modeling: Design, train, and tune NN models (LSTM/RNN/CNN) for hydrological time series and basin/climate data.
    • Prescriptive modeling & simulation: Implement mass-balance and process-based models for system behavior and what-if scenarios.
    • Scenario analysis: Forecast floods/droughts/extremes; benchmark against historical & observed datasets.
    • Optimization & decision support: Use Pyomo, Gurobi, SimPy and metaheuristics for robust decisions under uncertainty.
    • Geospatial integration: Ingest GIS, raster/vector, and climate datasets for real-time simulations.
    • Explainability & uncertainty: Add XAI and uncertainty quantification layers to outputs.
    • Transparency & docs: Maintain clear documentation, benchmarks, and reproducible pipelines.
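    Benchmarking, as listed above, usually starts with a naive baseline that any trained model must beat. A sketch of a persistence forecast scored with MAE (numbers invented):

```python
def persistence_forecast(series, horizon):
    """Naive baseline: repeat the last observed value."""
    return [series[-1]] * horizon

def mae(actual, predicted):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

history = [3.1, 3.4, 3.2, 3.6, 3.5]   # e.g. river stage, in meters
actual_next = [3.7, 3.6, 3.8]
baseline = persistence_forecast(history, horizon=3)
score = mae(actual_next, baseline)     # an LSTM should beat this MAE
```

    Reporting model skill relative to such baselines is what makes benchmarks against historical data meaningful to stakeholders.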

     

    Requirements

    • Solid track record with Neural Networks, LSTM, and time-series modeling.
    • Hands-on with Pyomo / Gurobi / SimPy (or metaheuristic optimization).
    • Knowledge of mass-balance modeling, hydrological processes, and climate data systems.
    • Python + ML stack: TensorFlow / PyTorch / scikit-learn.
    • Geospatial skills (GIS; raster/vector data).
    • Experience with explainability (XAI) and uncertainty assessment.
    • Strong analytical writing and documentation.

     

    Nice to have

    • Background in hydrology / water resources / climate modeling.
    • Experience with prescriptive analytics & decision-support for environmental domains.

    Tech stack (core)

    Python, PyTorch/TensorFlow, scikit-learn, Pyomo, Gurobi, SimPy, NumPy/Pandas, xarray, rasterio, GDAL, GeoPandas, MLflow/DVC, Docker.

     

    What we offer

    • Impactful work in climate & water-risk mitigation.
    • Freedom to shape modeling approaches and benchmarks.
    • Support for publications, conferences, and open science where possible.
    • Flexible hybrid setup.

  • 20 views · 0 applications · 9d

    Big Data Architect

    Full Remote · Ukraine · 10 years of experience · B2 - Upper Intermediate

    Our Client is a world-leading manufacturer of premium-quality “smart” beds that use innovative technologies and digitalized solutions to help people who struggle to sleep at night.

    Our client is a fast-moving, highly technical team of people with the ambitious goal of bringing people better health and well-being through the best possible sleep experience, aiming to be the leader in sleep. The product combines established expertise in creating comfortable, adjustable beds with the latest in sleep science, cutting-edge sensor technology, and data processing algorithms.

    The Role:

    As a Big Data Architect, you will lead strategy and innovation for the data platform and services, help guide our solution strategies, and develop new technologies and platforms.

    We are looking for individuals who have a desire to architect and the ability to rapidly analyze use cases, design technical solutions that meet business needs while adhering to existing standards and postures, and lead multiple technical teams during implementation. Successful candidates will have excellent written and oral communication skills in English, will be comfortable explaining technical concepts to a wide range of audiences, including senior leadership, and will have a deep understanding of modern big data architecture and design practices, patterns, and tools.

     

    Job Description

    • 10+ years of development/design/architecture experience, including at least 5 years with Big Data technologies on-prem or in the cloud.
    • Experience architecting, building, implementing, and managing Big Data platforms in the cloud, covering ingestion (batch and real-time), processing (batch and real-time), polyglot storage, data analytics, and data access
    • Good understanding of Data Governance, Data Security, Data Compliance, Data Quality, Meta Data Management, Master Data Management, Data Catalog
    • Proven understanding and demonstrable implementation experience of big data platform technologies in the cloud (AWS and Azure), including surrounding services like IAM, SSO, cluster monitoring, log analytics, etc.
    • Experience working with Enterprise Data Warehouse technologies, Multi-Dimensional Data Modeling, Data Architectures or other work related to the construction of enterprise data assets
    • Strong Experience implementing ETL/ELT processes and building data pipelines including workflow management, job scheduling and monitoring
    • Experience building stream-processing systems using solutions such as Apache Spark, Databricks, Kafka, etc.
    • Experience with Spark/Databricks technology is a must
    • Experience with Big Data querying tools
    • Solid skills in Python
    • Strong experience with data modelling and schema design
    • Strong SQL programming background
    • Excellent interpersonal and teamwork skills
    • Experience driving solution/enterprise-level architecture and collaborating with other tech leads
    • Strong problem solving, troubleshooting and analysis skills
    • Experience working in a geographically distributed team
    • Experience leading and mentoring other team members
    • Good knowledge of Agile Scrum
    • Good communication skills

     

    Job Responsibilities

    • Work directly with the Client teams to understand requirements and rapidly prototype data and analytics solutions based on business requirements
    • Architect, implement, and manage large-scale data platforms/applications, including ingestion, processing, storage, data access, data governance capabilities, and related infrastructure
    • Support the design and development of solutions for deploying data analytics notebooks, tools, dashboards, and reports to various stakeholders
    • Communicate with the Product/DevOps/Development/QA teams
    • Architect data pipelines and ETL/ELT processes to connect with various data sources
    • Design and maintain enterprise data warehouse models
    • Take part in performance optimization processes
    • Guide research activities (PoC) when necessary
    • Manage the cloud-based data & analytics platform
    • Establish best practices for CI/CD within the Big Data scope
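    As a toy illustration of the real-time ingestion and processing mentioned above (in practice this runs on Spark/Databricks or Kafka rather than in application code), a tumbling-window aggregation can be sketched as:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_sec):
    """Group (timestamp_sec, key) events into fixed, non-overlapping
    windows and count occurrences per (window_start, key)."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_sec) * window_sec
        counts[(window_start, key)] += 1
    return dict(counts)

# Invented clickstream events: (seconds since epoch, event type).
events = [(0, "view"), (12, "view"), (31, "buy"), (45, "view"), (62, "buy")]
windows = tumbling_window_counts(events, window_sec=30)
```

    The same tumbling-window semantics map directly onto Spark Structured Streaming's window functions at production scale.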

  • 53 views · 11 applications · 9d

    Middle Data Scientist

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · B1 - Intermediate

    In Competera, we are building a place where optimal pricing decisions can be made easily. We believe that AI technologies will soon drive all challenging decisions and are capable of helping humans be better. We are now looking for a Middle Data Scientist to play a key role in reshaping the way we provide our solutions.
     

    You could be a perfect match for the position if

    You want to:

    • Validate datasets to ensure data accuracy and consistency.
    • Design proof-of-concept (POC) solutions to explore new approaches.
    • Develop technical solutions by mapping requirements to existing tools and functionalities.
    • Train models and create custom approaches for new domains.
    • Troubleshoot data processing and model performance issues.

    You have:

    • 2+ years of experience in data science or a related field.
    • Strong SQL skills for data manipulation and extraction.
    • Proficiency in Python, with the ability to write modular and readable code for experiments and prototypes.
    • A solid mathematical background, preferably in a Computer Science-related field.
    • Expertise in scientific Python libraries, including NumPy, pandas, scikit-learn, and either Keras/TensorFlow or PyTorch.
    • Familiarity with Time Series Forecasting methodologies.
    • Experience in statistical testing, including A/B testing.
    • 1+ years working with tabular and multimodal data (e.g., combining tabular data with text, audio, or images).
    • Upper-intermediate or higher English level.
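    The A/B-testing experience listed above commonly reduces to a two-proportion z-test on conversion counts. A sketch with invented numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for comparing two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented experiment: 12% vs 17% conversion on 1000 users per arm.
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=170, n_b=1000)
significant = abs(z) > 1.96  # two-sided test at the 5% level
```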

    Soft skills:

    • Analytical mindset and critical thinking to solve complex problems.
    • Agile approach, with the ability to experiment and test hypotheses in a dynamic business environment.
    • Business-oriented thinking, capable of translating complex models into clear business insights.
    • Curiosity and a drive for continuous learning in the data domain.
    • Strong team player, able to collaborate across cross-functional teams.

    You’re gonna love it, and here’s why:

    • Rich innovative software stack, freedom to choose the best suitable technologies.
    • Remote-first ideology: freedom to operate from the home office or any suitable coworking.
    • Flexible working hours (we start between 8 and 11 am) and no time-tracking systems.
    • Regular performance and compensation reviews.
    • Recurrent 1-1s and measurable OKRs.
    • In-depth onboarding with a clear success track.
    • Competera covers 70% of your training/course fee.
    • 20 vacation days, 15 days off, and up to one week of paid Christmas holidays.
    • 20 business days of sick leave.
    • Partial medical insurance coverage.
    • We reimburse the cost of coworking.

    Drive innovations with us. Be a Competerian.

  • 45 views · 8 applications · 9d

    Junior ML Engineer

    Full Remote · Ukraine · 1.5 years of experience · B2 - Upper Intermediate

    We’re looking for a Junior ML Engineer to join our client’s R&D team and grow your expertise in bringing modern ML solutions into production.
     

    • 1.5+ years of experience working with ML models or data pipelines in Python
    • Familiarity with the ML lifecycle, from data preprocessing and feature engineering to training and evaluation
    • Practical experience with ML libraries such as scikit-learn, pandas, NumPy (experience with PyTorch or TensorFlow is a plus)
    • Knowledge of MLOps concepts (MLflow, SageMaker, Kubeflow, or similar tools)
    • Understanding of cloud platforms (AWS/GCP/Azure) and containerization (Docker)
    • Experience with Git, unit testing, and CI/CD workflows (even on a small scale)
    • English: Upper-Intermediate or higher

       

    Nice to have:

    • Exposure to real-time model monitoring, model drift, or A/B testing
    • Familiarity with data pipeline orchestration tools (Airflow, Prefect, Dagster)
    • Understanding of distributed systems (Spark, Ray) or vector databases
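    Model drift, mentioned in the list above, is often screened with the Population Stability Index over binned score or feature distributions. A minimal sketch (bin fractions invented):

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index between two binned distributions.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

train_bins = [0.25, 0.25, 0.25, 0.25]    # score distribution at training time
live_bins = [0.10, 0.20, 0.30, 0.40]     # distribution observed in production
drift_score = psi(train_bins, live_bins)  # ~0.23: moderate shift, worth alerting
```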

       

    Responsibilities

    • Support the design and development of ML pipelines for classical ML and LLM-based models
    • Contribute to data preprocessing, feature extraction, and model training workflows
    • Assist in deploying ML models to cloud environments (AWS/GCP/Azure) using Docker or similar tools
    • Help maintain monitoring and logging for model accuracy and performance
    • Collaborate with data scientists and backend engineers to integrate ML solutions into production
    • Learn and apply MLOps best practices, reproducibility, CI/CD, and monitoring
    • Participate in code reviews and continuous improvement of engineering processes
  • 16 views · 0 applications · 7d

    Lead Data Scientist

    Full Remote · Ukraine, Romania · 7 years of experience · C1 - Advanced

    Be part of a pioneering initiative to build the next generation of closed-loop, causal knowledge-generating reinforcement learning systems - driven by proprietary and patented algorithmic methods.

     

    Our advanced causal inference and reinforcement learning algorithms are already transforming operations within a Fortune 500 company, powering applications that span e-commerce content generation and targeting, as well as large-scale factory optimization and control.

     

    In this project, you’ll help push the boundaries of adaptive learning and real-time decision-making, creating a modular, reusable codebase designed for flexible deployment across diverse, high-impact domains.

     

    A key part of this work involves developing synthetic data generation frameworks to rigorously test and refine algorithmsβ€”ensuring they perform with the robustness and intelligence expected in real-world environments.

     

    Requirements:

     

    • 7+ years of experience in designing and implementing statistical computing and reinforcement learning algorithms for real-world systems.
    • Expertise in reinforcement learning (RL), including multi-armed bandits, contextual bandits.
    • Strong background in statistical computing, including experimental design, fractional factorial and response surface methodology, and multi-objective optimization.
    • Proficiency in programming languages for machine learning and statistical computing, particularly Python.
    • Experience in synthetic data generation, including stochastic process simulations.
    • Excellent problem-solving, communication, and collaboration skills, with the ability to work in a fast-paced research and development environment.
    • Bachelor’s, Master’s or Ph.D. in Statistics, Applied Mathematics, Machine Learning, or a related field.

     

    Preferred:

     

    • Background in experimental design for real-time decision-making.
    • Familiarity and experience with bandit-based and reinforcement learning techniques such as Thompson Sampling, LinUCB, and Monitored UCB for dynamic decision-making.
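    Thompson Sampling, named above, fits in a few lines of pure Python: sample each arm's Beta posterior and pull the argmax. The reward rates below are invented for illustration:

```python
import random

def thompson_pick(successes, failures):
    """Sample a Beta(successes+1, failures+1) posterior per arm
    and return the index of the largest sample."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return samples.index(max(samples))

random.seed(0)
true_rates = [0.3, 0.7]        # unknown to the agent
successes = [0, 0]
failures = [0, 0]
for _ in range(2000):
    arm = thompson_pick(successes, failures)
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
# The better arm (index 1) ends up with the vast majority of pulls.
```

    The same loop generalizes to contextual bandits by conditioning the posterior on features rather than keeping one Beta per arm.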

     

    Job responsibilities:

     

    • Collaborate with other scientists to ideate and implement proprietary reinforcement learning models for causal model-based control and adaptive optimization.
    • Develop and optimize statistical models for causal inference and real-time decision-making in dynamic environments.
    • Create synthetic data generation systems that accurately simulate real-world problem spaces for training and testing.
    • Develop multi-objective optimization algorithms to balance competing trade-offs in various applications.
    • Enhance adaptive learning approaches that enable the system to self-tune and self-correct based on environmental feedback.
    • Validate and refine learning algorithms using synthetic and real-world datasets.

     

  • 17 views · 0 applications · 7d

    Senior Data Scientist

    Full Remote · Ukraine · 5 years of experience · C1 - Advanced

    We are looking for an experienced and visionary Senior Data Scientist to join our client’s advanced research and development team. Our customer is a global leader in industry, worker safety, and consumer goods. Headquartered in Maplewood, Minnesota, the company produces over 60,000 innovative products, spanning adhesives, abrasives, laminates, personal protective equipment, window films, paint protection film, electrical and electronic components, car-care products, electronic circuits, and optical films.

     

    In this role, you will lead the design, development, and deployment of cutting-edge machine learning models and statistical algorithms to solve some of the company’s most complex business challenges. You will apply your deep expertise in statistical computing, natural language processing, and reinforcement learning to create real-world systems that deliver measurable value.

     

    As a senior team member, you will also shape technical strategy, mentor colleagues, and foster a culture of rigorous experimentation and data-driven excellence, ensuring that innovation translates into tangible impact across the organization. 

     

    Requirements:

    • A Master’s or PhD in Statistics, Applied Mathematics, Machine Learning, Computer Science, or a related quantitative field.
    • 5+ years of hands-on experience designing and implementing statistical computing, NLP, and/or reinforcement learning algorithms for real-world systems.
    • Strong theoretical and practical background in statistical computing, including experimental design, fractional factorial and response surface methodology, and multi-objective optimization.
    • Expert-level proficiency in Python and its core data science libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow, PyTorch).
    • Proven experience in synthetic data generation, including stochastic process simulations.
    • Excellent problem-solving abilities with a creative and analytical mindset.
    • Strong communication and collaboration skills, with a proven ability to present complex results to diverse audiences and thrive in a fast-paced R&D environment.

     

    Preferred Qualifications & Skills:

    • Industry experience in Consumer Packaged Goods (CPG) or a related field.
    • Experience contributing to or developing enterprise data stores (e.g., data meshes, lakehouses).
    • Knowledge of MLOps, DevOps methodologies, and CI/CD practices for deploying and managing models in production.
    • Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
    • Experience working with and consuming data from REST APIs.

     

    Job responsibilities:

    • Model Development & Implementation: Lead the end-to-end lifecycle of machine learning projects, from problem formulation and data exploration to designing, building, and deploying advanced statistical, NLP, and/or reinforcement learning models in production environments.
    • Advanced Statistical Analysis: Apply a strong background in statistical computing to design and execute complex experiments (including A/B testing, fractional factorial design, and response surface methodology) to optimize systems and products.
    • Algorithm & Solution Design: Architect and implement novel algorithms for multi-objective optimization and synthetic data generation, including stochastic process simulations, to solve unique business challenges.
    • Technical Leadership & Mentorship: Provide technical guidance and mentorship to junior and mid-level data scientists, fostering their growth through code reviews, knowledge sharing, and collaborative problem-solving.
    • Cross-Functional Collaboration: Work closely with product managers, engineers, and business stakeholders to identify opportunities, define project requirements, and translate complex scientific concepts into actionable business insights.
    • Research & Innovation: Stay at the forefront of the machine learning and data science fields, continuously researching new techniques and technologies to drive innovation and maintain our competitive edge.
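    The fractional factorial designs mentioned above halve the number of experimental runs by aliasing one factor with the product of the others. A sketch of a 2^(k-1) half-fraction with defining relation I = AB...K:

```python
from itertools import product

def half_fraction(k):
    """2^(k-1) fractional factorial: full factorial on the first k-1
    factors (levels -1/+1), with the last factor set to their product."""
    runs = []
    for levels in product([-1, 1], repeat=k - 1):
        last = 1
        for v in levels:
            last *= v
        runs.append(levels + (last,))
    return runs

design = half_fraction(3)  # 4 runs instead of the full factorial's 8
# Every run satisfies A*B*C = +1, so main effects are aliased
# with two-factor interactions (resolution III).
```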

     

  • 50 views · 8 applications · 6d

    Game Mathematician

    Full Remote · Worldwide · Product · 1 year of experience · B1 - Intermediate

    We are seeking a skilled and innovative Mathematician to join our team. In this role, you will design and develop mathematical models and algorithms that power and improve our existing games.

    Key Responsibilities:

     

    – Design and implement mathematical models for slot machines and lotteries
    – Develop models for marketing-related calculations and analytics
    – Analyze game data and provide actionable insights to improve game mechanics and player engagement
    – Collaborate closely with software developers, game designers, and other team members to optimize game math
    – Stay updated on industry trends and innovations in mathematics and statistics within online gambling
    – Contribute to the development and maintenance of technical and certification documentation related to game math
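    The core number behind a slot model is its return-to-player (RTP). A minimal sketch for a single-line, three-reel game that pays only on three of a kind (the paytable and reel weights are invented; real games add paylines, wilds, and per-reel strips):

```python
def rtp(paytable, reel_weights):
    """Expected return per unit bet: sum over symbols of
    P(three of a kind) * payout, assuming three identical,
    independent reels."""
    total = sum(reel_weights.values())
    expected = 0.0
    for symbol, payout in paytable.items():
        p = (reel_weights[symbol] / total) ** 3
        expected += p * payout
    return expected

paytable = {"cherry": 2, "bar": 10, "seven": 50}   # payout multiples of bet
weights = {"cherry": 5, "bar": 3, "seven": 2}      # stops per reel
game_rtp = rtp(paytable, weights)  # 0.92, i.e. 92% RTP / 8% house edge
```

    Certification documentation typically reports exactly this kind of exhaustive expected-value calculation alongside simulated play.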

    What kind of professional are we looking for?

     

    – Advanced degree in Mathematics, Statistics, or a related field
    – Solid understanding of probability theory, statistical analysis, and algorithm development
    – Hands-on experience with tools such as MATLAB, R, or Python for modeling and data analysis
    – Strong problem-solving skills and the ability to work independently
    – Excellent communication and collaboration skills
    – High proficiency in Ukrainian or Russian

     

    We Offer:

     

    – You can work from any part of the world (remote work)

    – Working hours: 10:00/11:00 – 19:00/20:00, Mon-Fri

    – Job in an international company

    – The opportunity to join a fast-growing team of professionals and a cool product

    – Stable and decent salary (based on interview results)

    – An opportunity for career and professional growth

    – An opportunity to implement your ideas

    – Ability to implement ambitious projects

    – Promptness of decision-making, absence of bureaucracy

     

  • 61 views · 3 applications · 6d

    Head of Data

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    Head of Data 

    We are looking for a highly motivated and results-driven Head of Data to join our team full-time. In this strategic role, you will shape the vision, architecture, and delivery frameworks for Data Engineering, Data Science, Quant, and Data Analytics, unifying these teams into a single, high-impact function. This is a key leadership position, guiding both technical and managerial directions for our data organization.

    We drive fintech innovation through deep analytical expertise and a data-first, engineering-driven approach.

    Key Responsibilities

    Data & Quant Leadership

    • Define and lead the strategy across Data Engineering, Data Science, Quant, and Data Analytics, focusing on high business impact
    • Build and manage cross-functional teams, including quants, data scientists, engineers, and analysts
    • Drive advanced analytics, ML strategy, feature engineering, and model development with a focus on responsible AI
    • Collaborate with stakeholders to turn complex business challenges into data-driven solutions

    Data Engineering Leadership

    • Design and evolve a scalable, reliable, and maintainable data platform architecture
    • Oversee development of robust ETL/ELT pipelines and real-time data streams
    • Establish engineering best practices: code reviews, CI/CD, data contracts, observability
    • Drive technology selection and resource planning for ClickHouse, Spark, and supporting infrastructure
    • Ensure data quality through monitoring, alerting, SLA ownership, and remediation processes

    People & Cross-Functional Leadership

    • Communicate effectively across technical and non-technical teams to influence decisions
    • Lead cross-team processes (e.g., grooming, design reviews, retrospectives)
    • Mentor team members, create growth plans, and foster a culture of psychological safety, accountability, and innovation

    Requirements

    • 5+ years of hands-on experience across Data Science, Quant, and Data Engineering, delivering end-to-end solutions
    • 3+ years of managerial experience, leading data teams including quants and data scientists
    • Bachelor's or Master's degree in Computer Science, Mathematics, Physics, Engineering, or a related field
    • Strong programming skills in Python, with experience writing clean, production-grade code
    • Solid understanding of software engineering best practices (CI/CD, testing, code reviews, clean architecture)
    • Practical experience with ML libraries and platforms (e.g., scikit-learn, XGBoost, PySpark)
    • Deep understanding of core ML algorithms: regression, gradient boosting, time series, etc.
    • Strong foundation in mathematical statistics, probability theory, and quantitative modeling
    • Proficient in SQL and experience with analytical and OLAP databases
    • English: Upper-Intermediate

    Nice to Have

    • Experience with ClickHouse or other OLAP databases
    • Background in trading or fintech
    • Experience analyzing and modeling time series or high-frequency data
    • Familiarity with anti-fraud systems, risk modeling, or portfolio analytics

    We offer

    • Tax expenses coverage for private entrepreneurs in Ukraine
    • Expert support and guidance for Ukrainian private entrepreneurs
    • 20 paid vacation days per year
    • 10 paid sick leave days per year
    • Public holidays as per the company's approved Public holiday list
    • Medical insurance
    • Opportunity to work remotely
    • Professional education budget
    • Language learning budget
    • Wellness budget (gym membership, sports gear and related expenses)
  • · 49 views · 1 application · 6d

    Annotator (Machine Learning Computer Vision)

    Hybrid Remote · Ukraine (Lviv) · 1 year of experience · B2 - Upper Intermediate
    Job Description - High attention to detail and commitment to annotation accuracy. - Comfort working with 2D/3D medical imaging data. - Experience with or willingness to learn tools like 3D Slicer, Amazon SageMaker Ground Truth, CVAT or similar. - General...

    Job Description

    - High attention to detail and commitment to annotation accuracy.

    - Comfort working with 2D/3D medical imaging data.

    - Experience with or willingness to learn tools like 3D Slicer, Amazon SageMaker Ground Truth, CVAT or similar.

    - A general understanding of Machine Learning & Computer Vision models would be a big plus.

    - Background in medicine, radiology, biomedical sciences, anatomy, or related healthcare fields is a big plus.

    - A basic understanding of QA processes and methodologies would be a plus.

    - Experience working in healthcare AI, research, or imaging is a plus.

    - Strong communication skills and at least an upper-intermediate English level for effective cooperation with the client.

    Job Responsibilities

    We are looking for a detail-oriented Annotator to prepare the foundational data for our ML models. Your meticulous work will be crucial in ensuring the accuracy and quality of the datasets that will power this transformative change.

    Initially, you will focus on preparing data related to our highest-volume staples (one linear and one circular design). Your responsibilities will directly contribute to building a production-ready analytical solution that will revolutionize testing and quality assurance for surgical devices.

    Your responsibilities will include:

    - Prepare datasets for training machine learning models that automatically detect information on medical device images.

    - Work closely with the client team on workload planning, results demonstration, and requirements clarification.

    - Analyze defects and problems in current algorithms based on video output.

    - Raise a flag when any defect occurs and verify that test sets produce acceptable output.

    - Quality Assurance: Follow strict medical annotation protocols provided by supervising doctors.

    - Collaboration: Work closely with medical experts, AI engineers, and fellow annotators in a multidisciplinary environment.

    Key Contribution: You will be the essential link between raw engineering data and the machine learning models that will automate and accelerate our product validation process.

    Department/Project Description

    We are launching a critical initiative to integrate advanced Artificial Intelligence (AI) and Machine Learning (ML) into the product development and validation process for high-volume surgical staples.

    This project aims to solve current process inefficiencies—specifically, long analysis wait times (up to 4 weeks per test) and high vendor costs—by bringing analytical capabilities in-house and automating the validation of millions of staples annually. This will lead to faster product development and significant cost savings.

  • · 33 views · 1 application · 5d

    Machine Learning CV Engineer

    Full Remote · Ukraine · Product · 3 years of experience
    Big product software company is looking for a Machine Learning CV Engineer. Remote work, high salary + financial bonuses (up to 100% of the salary), regular salary review, interesting projects, good working conditions. REQUIREMENTS: - Over 3 years of...

    Big product software company is looking for a Machine Learning CV Engineer. Remote work, high salary + financial bonuses (up to 100% of the salary), regular salary review, interesting projects, good working conditions.

    REQUIREMENTS:

    - Over 3 years of experience in Machine Learning;

    - 1+ year of Computer Vision experience;

    - Higher education;

    - Technical English (higher proficiency is a plus).

    COMPANY OFFERS:

    - Employment under a gig-contract with all taxes paid;

    - Flexible working hours;

    - 28 days of paid vacation + 15 days at your own request;

    - Paid sick leave;

    - Medical insurance (including dental and optical care) for the family;

    - Opportunity to become an inventor on international patents with paid bonuses;

    - Career and professional development opportunities;

    - Access to own base of courses and trainings;

    - Office in central Kyiv / remotely;

    - Provision of all necessary up-to-date equipment;

    - Regular salary reviews + financial bonuses (up to 100% of the salary);

    - Bonuses for wedding, childbirth, and other significant life events;

    - Paid maternity leave;

    - Paid lunches, tea, coffee, water, snacks;

    - Discounts on the company's products and services.

  • · 77 views · 8 applications · 5d

    AI / Data Scientist

    Full Remote · Ukraine · 1 year of experience · B2 - Upper Intermediate
    We're looking for a hands-on AI / Data Scientist who enjoys solving complex problems with data and turning insights into impactful solutions. In this role, you'll work on designing, building, and deploying models that extract intelligence from...

    We're looking for a hands-on AI / Data Scientist who enjoys solving complex problems with data and turning insights into impactful solutions. In this role, you'll work on designing, building, and deploying models that extract intelligence from large-scale, heterogeneous datasets — including network, behavioral, and textual data. You'll collaborate closely with AI engineers, developers, and security experts to shape features, pipelines, and analytical tools that make the internet a safer place.

    Key Responsibilities:

    • Manipulate, clean, and transform complex, high-volume datasets for modeling and analysis
    • Design and implement feature engineering strategies that enhance model accuracy, interpretability, and robustness
    • Build and evaluate machine learning models for anomaly detection, fraud identification, forecasting, and optimization
    • Explore and prototype solutions using Python (Pandas, NumPy, Scikit-learn, PyTorch, TensorFlow) and SQL
    • Translate product or business requirements into clear analytical tasks and model development plans
    • Contribute to MLOps workflows for model deployment, validation, and continuous monitoring
    • Conduct error analysis and model tuning to ensure reliability and efficiency

    Main requirements:

    • 1+ years of hands-on experience in DS/ML
    • Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, PyTorch, TensorFlow)
    • Proven ability to handle messy, large-scale, and multi-source data
    • English: Upper-Intermediate+, both written and spoken
    • Experience with classical machine learning algorithms and a solid understanding of their strengths and limitations
    • Familiarity with NLP techniques and modern deep learning approaches
    • Strong analytical mindset and ability to develop well-justified, data-driven solutions
    • Good command of SQL for data querying and preparation

    The benefits you will get:

    • Ability to influence processes and best practices
    • Opportunity to expand your technical background
    • Long-term cooperation with teammates and clients
    • Compensation of internal and external English language training
    • Flexible schedule
    • Paid vacation and sick leave
    • Our accountants will take care of the taxes
    • Legal support
  • · 30 views · 1 application · 5d

    Senior Data Scientist to $9000

    Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · B2 - Upper Intermediate
    Who we are: Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. About the Product: Our client is a leading SaaS company offering pricing...

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

    About the Product:

    Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.

    About the Role:

    As a Data Scientist, you'll play a critical role in shaping and enhancing our AI-driven pricing platform.

    Key Responsibilities:

    • Develop and Optimize Advanced ML Models: Build, improve, and deploy machine learning and statistical models for forecasting demand, analyzing price elasticities, and recommending optimal pricing strategies.
    • Lead End-to-End Data Science Projects: Own your projects fully, from conceptualization and experimentation through production deployment, monitoring, and iterative improvement.
    • Innovate with Generative and Predictive AI Solutions: Leverage state-of-the-art generative and predictive modeling techniques to automate complex pricing scenarios and adapt to rapidly changing market dynamics.

    Required Competence and Skills:

    • A Master's or PhD in Computer Science, Physics, Applied Mathematics or a related field, demonstrating a strong foundation in analytical thinking.
    • At least 5 years of professional experience in end-to-end machine learning lifecycle (design, development, deployment, and monitoring).
    • At least 5 years of professional experience with Python development, including OOP, writing production-grade code, testing, and optimization.
    • At least 5 years of experience with data mining, statistical analysis, and effective data visualization techniques.
    • Deep familiarity with modern ML/DL methods and frameworks (e.g., PyTorch, XGBoost, scikit-learn, statsmodels).
    • Strong analytical skills combined with practical experience interpreting model outputs to drive business decisions.

    Nice-to-Have:

    • Practical knowledge of SQL and experience with large-scale data systems like Hadoop or Spark.
    • Familiarity with MLOps tools and practices (CI/CD, model monitoring, data version control).
    • Experience in reinforcement learning and Monte Carlo methods.
    • A solid grasp of microeconomic principles, including supply and demand dynamics, price elasticity, as well as econometrics.
    • Experience with cloud services and platforms, preferably AWS.