Jobs
Computer Vision Engineer
Full Remote · Worldwide · Product · 3 years of experience · B2 - Upper Intermediate · MilTech
Drones kill and injure over 5,000 Ukrainians every month. Munin exists to stop that. We are building hand-launched micro-missiles to intercept small drones at short range. Small enough to carry three in a vest and cheap enough for wide use, this will be the smallest, most cost-effective guided missile ever deployed.
We work from a test site in Oslo, in the UK, and in Kyiv, directly with Ukrainian brigades. Our goal is to turn the tide of war and give NATO a proven soldier-level counter-drone solution.
We are hiring a Computer Vision Engineer to join our founding team.
You will lead the design and development of the missile's electronics system, from initial concept to field tests and production. You will own the architecture, guide component selection, and drive integration while staying hands-on through bring-up, testing, and iteration.
Responsibilities
• Design system architecture and lead electronics development
• Select and integrate sensors, processors, power systems, and actuators
• Design PCBs
• Develop test plans for live environments and build test rigs
• Collaborate with mechanical and software engineers to ensure reliable, testable systems
Requirements
• Experience with embedded or mechatronic system design
• Strong PCB design and debugging skills
• Skilled in sensor integration and power management
• Able to balance high-level design with hands-on execution
• Willing to work from Oslo, the UK, or Kyiv
Bonus
• Aerospace or computer vision experience
• Knowledge of control systems or autonomy
• FPGA programming experience
Why Munin
• Impact: Build tech that saves lives
• Ownership: Lead seeker development for a new missile category
• Team: Join a mission-driven group from Stanford, Imperial, Rheinmetall, and special forces
• Benefits: Salary, 0–3% equity, housing, travel, and training support
This is not a research role. We are flying real hardware fast. If you are ready to build something that matters, apply.
Machine Learning Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate
Tech Level: Senior
English Level: Upper-Intermediate
Employment Type: Full-time
Time Zone: CET
Start Date: ASAP
Duration: 12+ months
About the Project
The team is responsible for building and improving address search and geocoding solutions that enable users to find accurate and relevant locations with ease. The focus areas include query understanding, geocoding, hybrid search, and enhancing prediction accuracy for pickup and drop-off locations through advanced ML models.
Project Phase: Ongoing
Key Responsibilities
- Design ML systems end-to-end, including data analysis, annotation, and processing across multiple research areas or multimodal scenarios.
- Lead data preparation and computation across all development stages.
- Translate business goals and metrics into engineering and data science tasks.
- Own the entire data science product lifecycle – from pipeline development and implementation to A/B testing and production rollout.
- Create innovative ML-driven services to improve map accuracy and coverage.
- Work on search relevance optimization, personalization, and multilingual support within the Geo domain.
- Enhance route precision and ETA accuracy.
- Contribute to Geo data management – including ingestion pipelines, anomaly detection, and data reliability improvement.
- Perform geospatial analysis and extract actionable insights from location data.
Requirements
- Bachelor's degree in Statistics, Mathematics, Computer Science, Machine Learning, or a related field.
- 5+ years of hands-on experience as an ML Engineer, Applied Scientist, or Data Scientist.
- Strong background in Python and frameworks for streaming, batch, and asynchronous data processing.
- Proficiency with MLOps tools and lifecycle management of ML models.
- Experience designing and maintaining ML-powered services in production environments.
- Familiarity with Geo / Maps ML applications.
- Expertise in multiple ML domains (e.g., Computer Vision, NLP).
- Deep understanding of classical ML, deep learning, and underlying mathematical concepts.
- Solid experience in ML system design and MLOps practices for production.
- Strong grasp of software system design principles.
- Experience in experimental design and hypothesis validation.
- Awareness of security, risk, and control practices in production systems.
- (Plus) Knowledge of Golang, event-driven systems, or distributed deployment environments.
Tech Stack: Python, Go (nice to have), SQL, Kafka / Spark / Flink, PostGIS, Elasticsearch
Interview Process
- English proficiency check
- Internal technical interview
- Final technical interview with the project team
Senior Data Science Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate
Level: Senior
English: Upper-Intermediate
Employment type: Full-time
Time zone: CET
Start date: ASAP
Duration: 12+ months
Project Overview
You'll join an ongoing project focused on developing an advanced Recommendation System powered by AI and machine learning. The goal is to deliver highly personalized user experiences through data-driven insights. The system analyzes large volumes of data to suggest relevant content and integrates seamlessly across multiple platforms, providing real-time, context-aware recommendations.
Team setup: 2 Python Developers, 1 Data Scientist, and 1 Project Manager
Soft Skills
- Excellent communication and problem-solving skills
- Ability to work independently with minimal supervision
- Confidence in clarifying and refining requirements directly with stakeholders
Hard Skills / Must Have
- 5+ years of professional experience in Data Science or related fields
- Solid expertise with Scikit-learn for data manipulation, model development, and evaluation
- Experience working with LASER embeddings to generate text-based content representations
- Strong background in developing and enhancing recommendation models based on content similarity
- Proven experience in clustering models and optimizing clustering algorithms and workflows
Key Responsibilities
- Analyze and enhance recommendation models based on content similarity
- Develop and optimize clustering models and their performance
- Improve the process of building similarity matrices
- Refine data structures to achieve better recommendation accuracy
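As a flavor of the similarity-matrix work described above, here is a minimal scikit-learn sketch; the random vectors are stand-ins for LASER-style sentence embeddings, and all shapes and indices are purely illustrative:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for LASER-style sentence embeddings: 5 content items x 1024 dims.
rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(5, 1024))

# Similarity matrix: pairwise cosine similarity between all item embeddings.
sim = cosine_similarity(item_embeddings)

# Recommend the top-2 most similar items for item 0 (excluding itself).
scores = sim[0].copy()
scores[0] = -np.inf
top2 = np.argsort(scores)[::-1][:2]
```

In the real system the embeddings would come from a multilingual sentence encoder and the matrix would be refreshed as the catalog changes.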
Tech Stack: Flask, Pydantic, Gunicorn, Peewee, PostgreSQL, Sentry, Elastic, scikit-learn, LASER embeddings, concurrency and multithreading
Interview Process
- English check – 15 minutes
- Internal technical interview – 1–1.5 hours
- Final technical interview with the client – 1 hour
Senior/Lead Data Scientist
Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate
Description
Join a cutting-edge initiative to develop a next-generation, closed-loop causal knowledge generating reinforcement learning system based on proprietary and patented algorithmic methods. These advanced causal inference and reinforcement learning algorithms have already been deployed across multiple domains in a Fortune 500 company, powering applications ranging from e-commerce content generation and targeting to large-scale factory optimization and control.
This project will push the frontiers of adaptive learning and real-time decision-making, creating a modular, reusable codebase that can be flexibly redeployed across various high-impact systems. As part of this effort, the team will develop synthetic data generation systems to rigorously test and optimize these algorithms, ensuring they perform as if interacting with real-world environments.
Requirements
Required:
- Master's or Ph.D. in Statistics, Applied Mathematics, Machine Learning, or a related field.
- 7+ years of experience in designing and implementing statistical computing and reinforcement learning algorithms for real-world systems.
- Expertise in reinforcement learning (RL), including multi-armed and contextual bandits.
- Strong background in statistical computing, including experimental design, fractional factorial and response surface methodology, and multi-objective optimization.
- Proficiency in programming languages for machine learning and statistical computing, particularly Python.
- Experience in synthetic data generation, including stochastic process simulations.
- Strong English (min. B2 level).
- Excellent problem-solving, communication, and collaboration skills, with the ability to work in a fast-paced research and development environment.
Preferred:
- Background in experimental design for real-time decision-making.
- Familiarity and experience with bandit-based and reinforcement learning techniques such as Thompson Sampling, LinUCB, and Monitored UCB for dynamic decision-making.
Job responsibilities
- Collaborate with other scientists to ideate and implement proprietary reinforcement learning models for causal model-based control and adaptive optimization.
- Develop and optimize statistical models for causal inference and real-time decision-making in dynamic environments.
- Create synthetic data generation systems that accurately simulate real-world problem spaces for training and testing.
- Develop multi-objective optimization algorithms to balance competing trade-offs in various applications.
- Enhance adaptive learning approaches that enable the system to self-tune and self-correct based on environmental feedback.
- Validate and refine learning algorithms using synthetic and real-world datasets.
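As a taste of the bandit techniques named in the preferred qualifications, here is a minimal Thompson Sampling sketch for a Bernoulli bandit; the reward probabilities are synthetic and the setup is purely illustrative, not the proprietary algorithms this role concerns:

```python
import numpy as np

rng = np.random.default_rng(42)
true_p = [0.2, 0.5, 0.7]   # hidden reward probabilities (synthetic)
alpha = np.ones(3)          # Beta posterior: successes + 1 per arm
beta = np.ones(3)           # Beta posterior: failures + 1 per arm

for _ in range(2000):
    # Thompson Sampling: draw one sample from each arm's posterior, play the argmax.
    arm = int(np.argmax(rng.beta(alpha, beta)))
    reward = rng.random() < true_p[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

# After enough pulls, the posterior mean identifies the best arm.
best_arm = int(np.argmax(alpha / (alpha + beta)))
```

The same posterior-sampling pattern extends to contextual bandits (e.g., linear models per arm) for the adaptive-optimization work described above.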
Data scientist with Java expertise
Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
The primary goal of the project is the modernization, maintenance, and development of an eCommerce platform for a large US-based retail company, serving millions of omnichannel customers each week.
Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.
Current overriding priorities are onboarding new brands, re-architecture, database migrations, and migration of microservices to a unified cloud-native solution without any disruption to business.
Responsibilities:
We are looking for an experienced Data Engineer with Machine Learning expertise and good understanding of search engines, to work on the following:
- Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.
- Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.
- Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.
- Integrate ML models and search capabilities into production systems.
- Evaluate, fine-tune, and monitor search performance metrics.
- Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.
- Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.
Mandatory Skills Description:
- 5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.
- Strong programming experience in both Java and Python (production-level code, not just prototyping).
- Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).
- Experience with Vector Databases, Embeddings, and Semantic Search techniques.
- Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).
- Experience deploying and maintaining ML/search systems in production.
- Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).
Nice-to-Have Skills Description:
- Experience working in distributed teams with US customers
- Experience with LLMs, RAG pipelines, and vector retrieval frameworks.
- Knowledge of Spring Boot, FastAPI, or similar backend frameworks.
- Familiarity with Kubernetes, Docker, and cloud platforms (AWS/Azure/GCP).
- Experience with MLOps and model monitoring tools.
- Contributions to open-source search or ML projects.
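At its core, the vector-based retrieval this role covers reduces to nearest-neighbor search over embeddings. A brute-force sketch follows (random vectors stand in for real document embeddings; a production system would use Solr/Elasticsearch dense-vector fields or an ANN index instead of a full matrix product):

```python
import numpy as np

rng = np.random.default_rng(1)
docs = rng.normal(size=(100, 64))                    # hypothetical document embeddings
docs /= np.linalg.norm(docs, axis=1, keepdims=True)  # L2-normalize rows

def search(query_vec, k=5):
    """Return indices of the k nearest documents by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = docs @ q  # dot product equals cosine, since rows are unit-norm
    return np.argsort(scores)[::-1][:k]

# A query close to document 7 should retrieve it first.
hits = search(docs[7] + 0.01 * rng.normal(size=64))
```

Hybrid search then blends scores like these with lexical (BM25-style) scores from the inverted index.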
Data Scientist
Full Remote · Worldwide · 3 years of experience · B2 - Upper Intermediate
Project
Global freight management solutions and services, specializing in Freight Audit & Payment, Order Management, Supplier Management, Visibility, TMS, and Freight Spend Analytics.
Overview
We are looking for a Data Scientist with a strong background in statistics and probability theory to help us build intelligent analytical solutions. The current focus is on outlier detection in freight management data, with further development toward anomaly detection and forecasting models for logistics and freight spend. The role requires both deep analytical thinking and practical hands-on work with data, from SQL extraction to model deployment.
Key Responsibilities
- Apply statistical methods and machine learning techniques for outlier and anomaly detection.
- Design and develop forecasting models to predict freight costs, shipment volumes, and logistics trends.
- Extract, preprocess, and transform large datasets directly from SQL databases.
- Categorize exceptions into business-defined groups (e.g., High Value Exceptions, Accessorial Charge Exceptions, Unexpected Origin/Destination).
- Collaborate with business analysts to align analytical approaches with domain requirements.
- Use dashboards (e.g., nSight) for validation, visualization, and reporting of results.
- Ensure models are interpretable, scalable, and deliver actionable insights.
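As one illustration of the outlier-detection focus above, here is an Isolation Forest sketch over synthetic freight costs; the column name, data, and contamination rate are invented for the example:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic freight invoices: typical costs plus a few extreme exceptions.
costs = np.concatenate([rng.normal(500, 50, 200), [5000.0, 4800.0, 7.0]])
df = pd.DataFrame({"freight_cost": costs})

# Flag roughly the most anomalous 2% of invoices.
model = IsolationForest(contamination=0.02, random_state=0)
df["is_outlier"] = model.fit_predict(df[["freight_cost"]]) == -1

flagged = df.index[df["is_outlier"]].tolist()
```

In practice the flagged rows would then be routed into the business-defined exception groups (High Value, Accessorial Charge, Unexpected Origin/Destination) for analyst review.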
Requirements
- Strong foundation in statistics and probability theory.
- Proficiency in Python with libraries such as pandas, numpy, matplotlib, scikit-learn.
- Proven experience with outlier/anomaly detection techniques.
- Hands-on experience in forecasting models (time-series, regression, or advanced ML methods).
- Strong SQL skills for working with large datasets.
- Ability to communicate findings effectively to both technical and non-technical stakeholders.
Nice to Have
- Experience with ML frameworks (TensorFlow, PyTorch).
- Familiarity with MLOps practices and model deployment.
- Exposure to logistics, supply chain, or financial data.
- Knowledge of cloud platforms (AWS, GCP, Azure).
Machine Learning Engineer
Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate
Overview:
We are looking for a Senior ML Engineer with extensive experience in neural networks (NNs), LSTM architectures, and prescriptive modeling. A background in hydrology or environmental modeling is a strong plus. The engineer will develop and optimize simulation and decision-support models to forecast and mitigate floods, droughts, and compound extreme events.
Key Responsibilities:
• Predictive Modeling: Design and train neural network models (LSTM, RNN, CNN) for hydrological forecasting and time-series analysis of basin or climate data
• Prescriptive Modeling & Simulation: Develop simulation and optimization models using mass balance equations and hydrological process representations
• Scenario Analysis: Build models to predict floods, droughts, and extreme events; benchmark simulations against historical and observed datasets
• Optimization & Decision Support: Apply simulation/optimization libraries such as Pyomo, Gurobi, SimPy, and metaheuristic approaches for decision-making under uncertainty
• Geospatial Integration: Integrate models with geospatial and climate data sources to enable real-time scenario simulations
• Explainability & Uncertainty: Embed explainability and uncertainty quantification layers into model outputs to enhance stakeholder trust and interpretability
• Transparency & Documentation: Ensure comprehensive model documentation, benchmarking procedures, and reproducibility for scientific and stakeholder validation
Required Qualifications:
• Strong background in Neural Networks (NNs), LSTM, and time-series modeling
• Proven experience with simulation and optimization frameworks (Pyomo, Gurobi, SimPy, or metaheuristics)
• Knowledge of mass balance modeling, hydrological processes, and climate data systems
• Proficiency in Python and ML frameworks (TensorFlow, PyTorch, Scikit-learn)
• Familiarity with geospatial data integration (GIS, raster/vector datasets)
• Experience implementing model explainability and uncertainty assessment
• Excellent analytical and documentation skills for model transparency and validation
Nice to have:
• Background in hydrology, water resources, or climate modeling
• Experience with prescriptive analytics and decision-support systems for environmental domains
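A minimal PyTorch shape-sketch of the LSTM forecasting this role centers on; the feature count, window length, and class name are assumptions, and the training loop and real basin data are omitted:

```python
import torch
import torch.nn as nn

class FlowForecaster(nn.Module):
    """One-step-ahead streamflow forecast from a window of past observations."""
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last time step

model = FlowForecaster()
window = torch.randn(8, 30, 3)           # e.g., 8 basins, 30 days, 3 variables
pred = model(window)                     # shape: (8, 1)
```

Uncertainty quantification could then be layered on top, e.g. by predicting distribution parameters or using Monte Carlo dropout.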
Big Data Architect
Full Remote · Ukraine · 10 years of experience · B2 - Upper Intermediate
Our Client is a world-leading manufacturer of premium-quality “smart” beds, designed to help people who struggle to sleep at night through innovative technologies and digitalized solutions.
Our client is a fast-moving, highly technical team of people with the ambitious goal of bringing people better health and well-being through the best possible sleep experience, aiming to be the leader in sleep. The product combines established expertise in creating comfortable, adjustable beds with the latest in sleep science, cutting-edge sensor technology, and data processing algorithms.
The Role:
As a Big Data Architect, you will be responsible for the leadership and strategic innovation related to the data platform and services, help guide our solution strategies, and develop new technologies and platforms.
We are looking for individuals who have a desire to architect and the ability to rapidly analyze use cases, design technical solutions that meet business needs while adhering to existing standards and postures, and lead multiple technical teams during implementation. Successful candidates will have excellent written and oral communication skills in English, will be comfortable explaining technical concepts to a wide range of audiences, including senior leadership, and will have a deep understanding of modern big data architecture and design practices, patterns, and tools.
Job Description
- Total of 10+ years of development/design/architecture experience, with a minimum of 5 years of experience in Big Data technologies on-prem or in the cloud.
- Experience with architecting, building, implementing and managing Big Data platforms On Cloud, covering ingestion (Batch and Real time), processing (Batch and Realtime), Polyglot Storage, Data Analytics and Data Access
- Good understanding of Data Governance, Data Security, Data Compliance, Data Quality, Meta Data Management, Master Data Management, Data Catalog
- Proven understanding and demonstrable implementation experience of big data platform technologies on cloud (AWS and Azure) including surrounding services like IAM, SSO, Cluster monitoring, Log Analytics etc
- Experience working with Enterprise Data Warehouse technologies, Multi-Dimensional Data Modeling, Data Architectures or other work related to the construction of enterprise data assets
- Strong Experience implementing ETL/ELT processes and building data pipelines including workflow management, job scheduling and monitoring
- Experience building stream-processing systems using solutions such as Apache Spark, Databricks, and Kafka
- Experience with Spark/Databricks technology is a must
- Experience with Big Data querying tools
- Solid skills in Python
- Strong experience with data modelling and schema design
- Strong SQL programming background
- Excellent interpersonal and teamwork skills
- Experience to drive solution/enterprise-level architecture, collaborate with other tech leads
- Strong problem solving, troubleshooting and analysis skills
- Experience working in a geographically distributed team
- Experience with leading and mentorship of other team members
- Good knowledge of Agile Scrum
- Good communication skills
Job Responsibilities
- Work directly with the Client teams to understand the requirements/needs and rapidly prototype data and analytics solutions based upon business requirements
- Architect, implement, and manage large-scale data platforms/applications, including ingestion, processing, storage, data access, data governance capabilities, and related infrastructure
- Support the design and development of solutions for the deployment of data analytics notebooks, tools, dashboards, and reports to various stakeholders
- Communicate with Product/DevOps/Development/QA teams
- Architect data pipelines and ETL/ELT processes to connect with various data sources
- Design and maintain enterprise data warehouse models
- Take part in performance optimization processes
- Guide research activities (PoC) if necessary
- Manage the cloud-based data & analytics platform
- Establish best practices for CI/CD within the Big Data scope
Middle Data Scientist
Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · B1 - Intermediate
In Competera, we are building a place where optimal pricing decisions can be made easily. We believe that AI technologies will soon drive all challenging decisions and are capable of helping humans be better. We are now looking for a Middle Data Scientist to play a key role in reshaping the way we provide our solutions.
You could be a perfect match for the position if
You want to:
- Validate datasets to ensure data accuracy and consistency.
- Design proof-of-concept (POC) solutions to explore new approaches.
- Develop technical solutions by mapping requirements to existing tools and functionalities.
- Train models and create custom approaches for new domains.
- Troubleshoot data processing and model performance issues.
You have:
- 2+ years of experience in data science or a related field.
- Strong SQL skills for data manipulation and extraction.
- Proficiency in Python, with the ability to write modular and readable code for experiments and prototypes.
- A solid mathematical background, preferably in a Computer Science-related field.
- Expertise in scientific Python libraries, including NumPy, pandas, scikit-learn, and either Keras/TensorFlow or PyTorch.
- Familiarity with Time Series Forecasting methodologies.
- Experience in statistical testing, including A/B testing.
- 1+ years working with tabular and multimodal data (e.g., combining tabular data with text, audio, or images).
- Upper-intermediate or higher English level.
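The statistical-testing requirement above can be illustrated with a standard two-proportion z-test; the conversion counts are made up for the example:

```python
import math
from scipy.stats import norm

# Hypothetical A/B test: conversions / visitors for two pricing variants.
conv = [120, 150]
n = [2400, 2450]

# Pooled two-proportion z-test for a difference in conversion rates.
p_pool = sum(conv) / sum(n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n[0] + 1 / n[1]))
z = (conv[1] / n[1] - conv[0] / n[0]) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided
```

With these numbers the uplift is not significant at the 5% level, so the experiment would keep running or be redesigned for more power.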
Soft skills:
- Analytical mindset and critical thinking to solve complex problems.
- Agile approach, with the ability to experiment and test hypotheses in a dynamic business environment.
- Business-oriented thinking, capable of translating complex models into clear business insights.
- Curiosity and a drive for continuous learning in the data domain.
- Strong team player, able to collaborate across cross-functional teams.
You're gonna love it, and here's why:
- Rich innovative software stack, freedom to choose the best suitable technologies.
- Remote-first ideology: freedom to operate from the home office or any suitable coworking.
- Flexible working hours (start anytime between 8 and 11 am) and no time-tracking systems.
- Regular performance and compensation reviews.
- Recurrent 1-1s and measurable OKRs.
- In-depth onboarding with a clear success track.
- Competera covers 70% of your training/course fee.
- 20 vacation days, 15 days off, and up to one week of paid Christmas holidays.
- 20 business days of sick leave.
- Partial medical insurance coverage.
- We reimburse the cost of coworking.
Drive innovations with us. Be a Competerian.
Middle ML Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate
We're looking for a Middle ML Engineer to join a long-term outstaff project with an international R&D team.
You'll work on developing, deploying, and monitoring ML solutions in the cloud, from data and feature engineering to production-grade model deployment.
Requirements
- 3+ years of experience deploying ML models (classical or LLM-based) in cloud environments (AWS/GCP/Azure) using containers and Kubernetes
- Practical experience with MLOps tools: MLflow, Kubeflow, SageMaker, Vertex AI, Feast, or similar
- Hands-on experience building and maintaining data and feature pipelines
- Understanding of real-time model monitoring (drift, latency, performance)
- Solid software engineering background: version control, CI/CD, testing, cost optimization
- English: Upper-Intermediate or higher
Nice to have:
- Experience with PyTorch or TensorFlow
- Familiarity with Spark or Ray
- Knowledge of feature stores and data governance
Responsibilities
- Design, implement, and maintain end-to-end ML pipelines for both classical and LLM-based models
- Deploy and manage ML models in AWS/GCP/Azure using Docker and Kubernetes
- Apply MLOps best practices for reproducibility, monitoring, and automation
- Build and maintain robust data and feature pipelines for ML
- Implement real-time monitoring for model drift, latency, and performance
- Collaborate with Data Scientists, DevOps, and Product teams to integrate ML solutions into production
- Contribute to engineering best practices, CI/CD, testing, and cost optimization
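One common way to implement the drift-monitoring responsibility above is a Population Stability Index (PSI) check between a training sample and live traffic; the data below is synthetic, and the widely used 0.2 alert threshold is only a convention:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index for one feature: ~0 = stable, >0.2 often alerts."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # cover out-of-range live values
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)               # feature at training time
score_same = psi(train, rng.normal(0, 1, 10_000))    # no drift
score_drift = psi(train, rng.normal(0.5, 1, 10_000))  # mean shift in production
```

In a real pipeline, scores like this would be computed per feature on a schedule and exported to the monitoring stack alongside latency and performance metrics.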
Lead Data Scientist
Full Remote · Ukraine, Romania · 7 years of experience · C1 - Advanced
Be part of a pioneering initiative to build the next generation of closed-loop, causal knowledge-generating reinforcement learning systems - driven by proprietary and patented algorithmic methods.
Our advanced causal inference and reinforcement learning algorithms are already transforming operations within a Fortune 500 company, powering applications that span e-commerce content generation and targeting, as well as large-scale factory optimization and control.
In this project, youβll help push the boundaries of adaptive learning and real-time decision-making, creating a modular, reusable codebase designed for flexible deployment across diverse, high-impact domains.
A key part of this work involves developing synthetic data generation frameworks to rigorously test and refine algorithms, ensuring they perform with the robustness and intelligence expected in real-world environments.
Requirements:
- 7+ years of experience in designing and implementing statistical computing and reinforcement learning algorithms for real-world systems.
- Expertise in reinforcement learning (RL), including multi-armed and contextual bandits.
- Strong background in statistical computing, including experimental design, fractional factorial and response surface methodology, and multi-objective optimization.
- Proficiency in programming languages for machine learning and statistical computing, particularly Python.
- Experience in synthetic data generation, including stochastic process simulations.
- Excellent problem-solving, communication, and collaboration skills, with the ability to work in a fast-paced research and development environment.
- Bachelor's, Master's, or Ph.D. in Statistics, Applied Mathematics, Machine Learning, or a related field.
Preferred:
- Background in experimental design for real-time decision-making.
- Familiarity and experience with bandit-based and reinforcement learning techniques such as Thompson Sampling, LinUCB, and Monitored UCB for dynamic decision-making.
Job responsibilities:
- Collaborate with other scientists to ideate and implement proprietary reinforcement learning models for causal model-based control and adaptive optimization.
- Develop and optimize statistical models for causal inference and real-time decision-making in dynamic environments.
- Create synthetic data generation systems that accurately simulate real-world problem spaces for training and testing.
- Develop multi-objective optimization algorithms to balance competing trade-offs in various applications.
- Enhance adaptive learning approaches that enable the system to self-tune and self-correct based on environmental feedback.
- Validate and refine learning algorithms using synthetic and real-world datasets.
Senior Data Scientist
Full Remote Β· Ukraine Β· 5 years of experience Β· C1 - AdvancedWe are looking for experienced and visionary Senior Data Scientist to join our clientβs advanced research and development team. Our customer is a global leader in industry, worker safety, and consumer goods. Headquartered in Maplewood, Minnesota, the...We are looking for experienced and visionary Senior Data Scientist to join our clientβs advanced research and development team. Our customer is a global leader in industry, worker safety, and consumer goods. Headquartered in Maplewood, Minnesota, the company produces over 60,000 innovative products, spanning adhesives, abrasives, laminates, personal protective equipment, window films, paint protection film, electrical and electronic components, car-care products, electronic circuits, and optical films.
In this role, you will lead the design, development, and deployment of cutting-edge machine learning models and statistical algorithms to solve some of the company's most complex business challenges. You will apply your deep expertise in statistical computing, natural language processing, and reinforcement learning to create real-world systems that deliver measurable value.
As a senior team member, you will also shape technical strategy, mentor colleagues, and foster a culture of rigorous experimentation and data-driven excellence, ensuring that innovation translates into tangible impact across the organization.
Requirements:
- A Master's or PhD in Statistics, Applied Mathematics, Machine Learning, Computer Science, or a related quantitative field.
- 5+ years of hands-on experience designing and implementing statistical computing, NLP, and/or reinforcement learning algorithms for real-world systems.
- Strong theoretical and practical background in statistical computing, including experimental design, fractional factorial and response surface methodology, and multi-objective optimization.
- Expert-level proficiency in Python and its core data science libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow, PyTorch).
- Proven experience in synthetic data generation, including stochastic process simulations.
- Excellent problem-solving abilities with a creative and analytical mindset.
- Strong communication and collaboration skills, with a proven ability to present complex results to diverse audiences and thrive in a fast-paced R&D environment.
Preferred Qualifications & Skills:
- Industry experience in Consumer Packaged Goods (CPG) or a related field.
- Experience contributing to or developing enterprise data stores (e.g., data meshes, lakehouses).
- Knowledge of MLOps, DevOps methodologies, and CI/CD practices for deploying and managing models in production.
- Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
- Experience working with and consuming data from REST APIs.
Job responsibilities:
- Model Development & Implementation: Lead the end-to-end lifecycle of machine learning projects, from problem formulation and data exploration to designing, building, and deploying advanced statistical, NLP, and/or reinforcement learning models in production environments.
- Advanced Statistical Analysis: Apply a strong background in statistical computing to design and execute complex experiments (including A/B testing, fractional factorial design, and response surface methodology) to optimize systems and products.
- Algorithm & Solution Design: Architect and implement novel algorithms for multi-objective optimization and synthetic data generation, including stochastic process simulations, to solve unique business challenges.
- Technical Leadership & Mentorship: Provide technical guidance and mentorship to junior and mid-level data scientists, fostering their growth through code reviews, knowledge sharing, and collaborative problem-solving.
- Cross-Functional Collaboration: Work closely with product managers, engineers, and business stakeholders to identify opportunities, define project requirements, and translate complex scientific concepts into actionable business insights.
- Research & Innovation: Stay at the forefront of the machine learning and data science fields, continuously researching new techniques and technologies to drive innovation and maintain our competitive edge.
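As an illustration of the stochastic-process simulation this posting refers to under synthetic data generation, a geometric Brownian motion path generator could look like the following; the drift, volatility, and step count are arbitrary example values:

```python
import math
import random

def simulate_gbm(s0, mu, sigma, n_steps, dt=1 / 252, seed=42):
    """Simulate one geometric Brownian motion path:
    S_{t+dt} = S_t * exp((mu - sigma^2 / 2) * dt + sigma * sqrt(dt) * Z)."""
    rng = random.Random(seed)
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        step = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        path.append(path[-1] * math.exp(step))
    return path

# One synthetic "price-like" series: 252 daily steps from 100.0.
path = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, n_steps=252)
```

Because the exponential of the log-return is applied multiplicatively, every simulated value stays strictly positive, which is the usual reason GBM is chosen for price-like synthetic data.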
· 112 views · 21 applications · 5d
Game Mathematician
Full Remote · Worldwide · Product · 1 year of experience · B1 - Intermediate
We are seeking a skilled and innovative Mathematician to join our team. In this role, you will design and develop mathematical models and algorithms that power and improve our existing games.
Key Responsibilities:
- Design and implement mathematical models for slot machines and lotteries
- Develop models for marketing-related calculations and analytics
- Analyze game data and provide actionable insights to improve game mechanics and player engagement
- Collaborate closely with software developers, game designers, and other team members to optimize game math
- Stay updated on industry trends and innovations in mathematics and statistics within online gambling
- Contribute to the development and maintenance of technical and certification documentation related to game math
What kind of professional are we looking for?
- Advanced degree in Mathematics, Statistics, or a related field
- Solid understanding of probability theory, statistical analysis, and algorithm development
- Hands-on experience with tools such as MATLAB, R, or Python for modeling and data analysis
- Strong problem-solving skills and the ability to work independently
- Excellent communication and collaboration skills
- High proficiency in Ukrainian or Russian
We Offer:
- Remote work from anywhere in the world
- Working hours: 10:00/11:00 – 19:00/20:00, Mon-Fri
- A job in an international company
- The opportunity to join a fast-growing team of professionals and a cool product
- A stable and decent salary (based on interview results)
- Opportunities for career and professional growth
- The opportunity to implement your ideas
- The ability to take on ambitious projects
- Fast decision-making and no bureaucracy
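A toy version of the slot-machine modelling described in this posting: computing the exact expected return (RTP) of a three-reel game by enumerating weighted reel stops against a paytable. The symbols, weights, and payouts below are invented for illustration:

```python
from itertools import product

# Hypothetical 3-reel slot: each reel uses the same weighted symbol strip.
weights = {"cherry": 5, "bar": 3, "seven": 1}
# Payout per 1-unit bet for three-of-a-kind; all other combos pay 0.
paytable = {("cherry",) * 3: 2, ("bar",) * 3: 10, ("seven",) * 3: 100}

def exact_rtp(weights, paytable):
    """Enumerate every reel-stop combination, weight each payout by its
    probability, and return the expected payout per unit bet."""
    total = sum(weights.values())
    rtp = 0.0
    for combo in product(weights, repeat=3):
        prob = 1.0
        for symbol in combo:
            prob *= weights[symbol] / total
        rtp += prob * paytable.get(combo, 0)
    return rtp

rtp = exact_rtp(weights, paytable)  # 620/729, roughly 85% RTP
```

For games too large to enumerate, the same quantity would be estimated by Monte Carlo simulation, but certification work usually demands the exact enumeration where feasible.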
· 108 views · 11 applications · 20d
AI / Data Scientist
Full Remote · Ukraine · 1 year of experience · B2 - Upper Intermediate
We're looking for a hands-on AI / Data Scientist who enjoys solving complex problems with data and turning insights into impactful solutions. In this role, you'll work on designing, building, and deploying models that extract intelligence from large-scale, heterogeneous datasets, including network, behavioral, and textual data. You'll collaborate closely with AI engineers, developers, and security experts to shape features, pipelines, and analytical tools that make the internet a safer place.
Key Responsibilities:
- Manipulate, clean, and transform complex, high-volume datasets for modeling and analysis
- Design and implement feature engineering strategies that enhance model accuracy, interpretability, and robustness
- Build and evaluate machine learning models for anomaly detection, fraud identification, forecasting, and optimization
- Explore and prototype solutions using Python (Pandas, NumPy, Scikit-learn, PyTorch, TensorFlow) and SQL
- Translate product or business requirements into clear analytical tasks and model development plans
- Contribute to MLOps workflows for model deployment, validation, and continuous monitoring
- Conduct error analysis and model tuning to ensure reliability and efficiency
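The anomaly-detection work listed above often begins with a simple statistical baseline before reaching for learned models such as Isolation Forest; a z-score flagger over a univariate series (the data and threshold are invented) might look like:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag indices whose absolute z-score exceeds the threshold.
    A cheap baseline to compare learned anomaly detectors against."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    if std == 0:
        return []  # constant series: nothing can be anomalous
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

data = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9]
anomalies = zscore_anomalies(data, threshold=2.5)  # flags the spike at index 7
```

One known weakness, visible even here, is that a large outlier inflates the standard deviation and masks smaller anomalies, which is why robust variants (median and MAD) are a common next step.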
Main requirements:
- 1+ years of hands-on experience in DS/ML
- Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, PyTorch, TensorFlow)
- Proven ability to handle messy, large-scale, and multi-source data
- English: Upper-Intermediate+, both written and spoken
- Experience with classical machine learning algorithms and a solid understanding of their strengths and limitations
- Familiarity with NLP techniques and modern deep learning approaches
- Strong analytical mindset and ability to develop well-justified, data-driven solutions
- Good command of SQL for data querying and preparation
The benefits you will get:
- Ability to influence processes and best practices
- Opportunity to expand your technical background
- Long-term cooperation with teammates and clients
- Compensation of internal and external English language training
- Flexible schedule
- Paid vacation and sick leave
- Our accountants will take care of the taxes
- Legal support
· 21 views · 0 applications · 18d
Data Architect (AWS and Python FastAPI)
Full Remote · Ukraine · 6 years of experience · B2 - Upper Intermediate
Client
Our client is a leading legal recruiting company focused on building a cutting-edge data-driven platform for lawyers and law firms. The platform consolidates news and analytics, real-time deal and case tracking from multiple sources, firm and lawyer profiles with cross-linked insights, rankings, and more, all in one unified place.
Position overview
We are seeking a skilled Data Architect with strong expertise in AWS technologies (Step Functions, Lambda, RDS - PostgreSQL), Python, and SQL to lead the design and implementation of the platform's data architecture. This role involves defining data models, building ingestion pipelines, applying AI-driven entity resolution, and managing scalable, cost-effective infrastructure aligned with cloud best practices.
Responsibilities
- Define entities, relationships, and persistent IDs; enforce the Fact schema with confidence scores, timestamps, validation status, and source metadata.
- Blueprint ingestion workflows from law firm site feeds; normalize data, extract entities, classify content, and route low-confidence items for review.
- Develop a hybrid of deterministic rules and LLM-assisted matching; configure thresholds for auto-accept, manual review, or rejection.
- Specify Ops Portal checkpoints, data queues, SLAs, and create a corrections/version history model.
- Stage a phased rollout of data sources, from ingestion through processing, storage, and replication to management via CMS.
- Align architecture with AWS and Postgres baselines; design for scalability, appropriate storage tiers, and cost-effective compute and queuing solutions.
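The hybrid matching described in these responsibilities, deterministic rules first and a scored fallback routed by thresholds, can be sketched as follows. The Jaccard scorer stands in for the LLM-assisted matcher the posting mentions, and the field names and thresholds are assumptions:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity: a cheap stand-in for the
    LLM-assisted scorer described in the posting."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def route_match(record, candidate, accept=0.9, reject=0.5):
    """Deterministic rule first (exact match after normalization),
    then a similarity score routed into accept / review / reject bands."""
    def norm(s):
        return " ".join(s.lower().split())
    if norm(record["name"]) == norm(candidate["name"]):
        return "auto-accept"
    score = jaccard(norm(record["name"]), norm(candidate["name"]))
    if score >= accept:
        return "auto-accept"
    if score < reject:
        return "reject"
    return "manual-review"

print(route_match({"name": "Smith & Jones"}, {"name": "Smith & Jones LLP"}))
# score 0.75 falls between the bands -> "manual-review"
```

Items landing in the review band are exactly what the Ops Portal checkpoints and data queues in the next responsibility would consume.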
Requirements
- Proven experience as a Data Architect or Senior Data Engineer working extensively with AWS services.
- Strong proficiency in Python development, preferably with FastAPI or similar modern frameworks.
- Deep understanding of data modeling principles, entity resolution, and schema design for complex data systems.
- Hands-on experience designing and managing scalable data pipelines, workflows, and AI-driven data processing.
- Familiarity with relational databases such as PostgreSQL.
- Solid experience in data architecture, including data modelling, and knowledge of different data architectures such as the Medallion architecture and dimensional modelling.
- Strong knowledge of cloud infrastructure cost optimization and performance tuning.
- Excellent problem-solving skills and ability to work in a collaborative, agile environment.
Nice to have
- Experience within legal tech or recruiting data domains.
- Familiarity with Content Management Systems (CMS) for managing data sources.
- Knowledge of data privacy, security regulations, and compliance standards.
- Experience with web scraping.
- Experience with EMR and SageMaker.