Data Scientist Jobs

  • 40 views · 4 applications · 9d

    Senior Data Scientist to $8000

    Full Remote · Worldwide · 6 years of experience · English - B2

    Our Customer:

    Our customer is a global, data-driven digital advertising technology company operating at large scale across mobile apps and web platforms.
     

    Your Tasks:

    • Analyze large, complex datasets to uncover patterns, insights, and business opportunities;
    • Design, build, and own scalable, end-to-end data pipelines;
    • Develop and productionize advanced machine learning models and algorithms, particularly in monetization-related domains;
    • Build and maintain data models for real-time (online) processing;
    • Contribute to the creation of new data-driven products and continuously improve existing ones;
    • Collaborate with Product, BI, Analytics, DevOps, R&D, and Marketing teams to deliver complete solutions;
    • Lead full production releases, including requirements analysis, testing, monitoring, result evaluation, and rapid response to critical issues.
       

    Required Experience and Skills:

    • 6+ years of experience as a Data Scientist with a strong foundation in machine learning concepts and models;
    • Previous experience in the Ad-Tech industry;
    • Hands-on experience working on large-scale, high-load systems processing billions of requests per second;
    • Hands-on experience with Python including writing production-ready code and working with modern Python frameworks and tooling;
    • Experience with data science / ML libraries and models, such as regression and classification models (e.g. XGBoost, LightGBM), recommendation systems, and related ML techniques;
    • Solid understanding of databases and SQL for data analysis and retrieval;
    • Experience with monitoring and alerting tools such as Grafana and Kibana;
    • Proven experience working in real-time / online environments;
    • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field;
    • Strong end-to-end ownership mindset: ability to take solutions from research and experimentation through implementation and production;
    • Fluent English with strong communication and collaboration skills.
       

    Would Be a Plus:

    • Familiarity with the programmatic advertising ecosystem (DSPs, SSPs, ad exchanges);
    • Experience with relational databases (e.g. Vertica, VoltDB) and non-relational databases (e.g. MongoDB);
    • Experience working with PMML.
       

    Working conditions

    5-day working week, 8-hour working day;

    Remote work.

  • 26 views · 1 application · 10d

    Senior Machine Learning Engineer

    Full Remote · Poland, Ukraine · 5 years of experience · English - B2

    A leading mobile marketing and audience platform empowers the app ecosystem with cutting-edge solutions in mobile marketing, audience building, and monetization. With integration into over 500,000 monthly active apps and a global reach, the platform leverages first-party data to deliver impactful and scalable advertising solutions.

    We’re looking for a highly skilled, independent, and driven Machine Learning Engineer to lead the design and development of our next-generation real-time inference services - the core engine powering algorithmic decision-making at scale. This is a rare opportunity to own the system at the heart of our product, serving billions of daily requests across mobile apps, with tight latency and performance constraints.

    You’ll work at the intersection of machine learning, large-scale backend engineering, and business logic, building robust services that blend predictive models with dynamic engineering logic - all while maintaining extreme performance and reliability requirements.

    Description:

    • Own and lead the design and development of low-latency Algo inference services handling billions of requests per day.
    • Build and scale robust real-time decision-making engines, integrating ML models with business logic under strict SLAs.
    • Collaborate closely with DS to deploy models seamlessly and reliably in production.
    • Design systems for model versioning, shadowing, and A/B testing at runtime.
    • Ensure high availability, scalability, and observability of production systems.
    • Continuously optimize latency, throughput, and cost-efficiency using modern tooling and techniques.
    • Work independently while interfacing with cross-functional stakeholders from Algo, Infra, Product, Engineering, BA & Business.
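
    The model versioning, shadowing, and A/B testing bullet above is often implemented with deterministic hash-based traffic splitting. A minimal sketch in plain Python (the function names, the champion/challenger labels, and the 10% split are illustrative assumptions, not details from this posting):

```python
import hashlib

def assign_variant(request_id: str, split: float = 0.1) -> str:
    """Hash the request id into [0, 1] so each request maps to a stable
    bucket; a fixed fraction `split` of traffic goes to the challenger."""
    digest = hashlib.sha256(request_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "challenger" if bucket < split else "champion"

def serve(request_id: str, champion, challenger, shadow=None):
    """Route the request to its A/B-assigned model; optionally score a
    shadow model whose output is recorded but never served."""
    variant = assign_variant(request_id)
    model = challenger if variant == "challenger" else champion
    decision = model(request_id)
    if shadow is not None:
        shadow(request_id)  # for offline comparison only, never returned
    return variant, decision
```

    Hashing (rather than random sampling) keeps assignments stable across retries and replicas, which matters when billions of requests must be bucketed consistently without shared state.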

    Requirements:

    • B.Sc. or M.Sc. in Computer Science, Software Engineering, or a related technical discipline.
    • 5+ years of experience building high-performance backend or ML inference systems.
    • Deep expertise in Python and experience with low-latency APIs and real-time serving frameworks (e.g., FastAPI, Triton Inference Server, TorchServe, BentoML).
    • Experience with scalable service architecture, message queues (Kafka, Pub/Sub), and async processing.
    • Strong understanding of model deployment practices, online/offline feature parity, and real-time monitoring.
    • Experience in cloud environments (AWS, GCP, or OCI) and container orchestration (Kubernetes).
    • Experience working with in-memory and NoSQL databases (e.g. Aerospike, Redis, Bigtable) to support ultra-fast data access in production-grade ML services.
    • Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and best practices for alerting and diagnostics.
    • A strong sense of ownership and the ability to drive solutions end-to-end.
    • Passion for performance, clean architecture, and impactful systems.

    Why join us?

    • Lead the mission-critical inference engine that drives our core product.
    • Join a high-caliber Algo group solving real-time, large-scale, high-stakes problems.
    • Work on systems where every millisecond matters, and every decision drives real value.
    • Enjoy a fast-paced, collaborative, and empowered culture with full ownership of your domain.

    What we offer:

    • Polish public holidays.
    • 20 working days per year of Non-Operational Allowance, to be used for personal recreation and compensated in full. These days must be used within the calendar year, with no rollover.
    • Health Insurance.
    • Gym Subscription (Multisport).
  • 59 views · 3 applications · 10d

    Computer Vision/Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 1 year of experience · English - B2

    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the role:
    We are looking for a Computer Vision / Machine Learning Engineer to develop offline CV models for industrial visual inspection.


    Your main task will be to design, train, and evaluate models on inspection data in order to:

     

    • Improve discrimination between good vs. defect samples
    • Provide insights into key defect categories (e.g., terminal electrode irregularities, surface chipping)
    • Significantly reduce false-positive rates, optimizing for either precision or recall
    • Prepare the solution for future deployment, scaling, and maintenance

    Key Responsibilities:
    Data Analysis & Preparation
    - Conduct dataset audits, including class balance checks and sample quality reviews
    - Identify low-frequency defect classes and outliers
    - Design and implement augmentation strategies for rare defects and edge cases
    Model Development & Evaluation
    - Train deep-learning models on inspection images for defect detection
    - Use modern computer vision / deep learning frameworks (e.g., PyTorch, TensorFlow)
    - Evaluate models using confusion matrices, ROC curves, precision–recall curves, F1 scores and other relevant metrics
    - Analyze false positives/false negatives and propose thresholds or model improvements
    Reporting & Communication
    - Prepare clear offline performance reports and model evaluation summaries
    - Explain classifier decisions, limitations, and reliability in simple, non-technical language when needed
    - Provide recommendations for scalable deployment in later phases (e.g., edge / on-prem inference, integration patterns)
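
    The evaluation metrics named above (confusion matrix, precision, recall, F1) all reduce to four counts per class. A from-scratch sketch for a binary good/defect labeling (function names are illustrative):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Return (TP, FP, FN, TN) for the given positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def precision_recall_f1(y_true, y_pred):
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0  # flagged defects that are real
    recall = tp / (tp + fn) if tp + fn else 0.0     # real defects that are caught
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

    In inspection settings, false positives (good parts flagged as defects) and false negatives (missed defects) carry very different costs, which is why the role calls for threshold analysis rather than accuracy alone.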

    Candidate Requirements:
    Must-have:
    - 1-2 years of hands-on experience with computer vision and deep learning (classification, detection, or segmentation)
    - Strong proficiency in Python and at least one major DL framework (PyTorch or TensorFlow/Keras)
    - Solid understanding of:

    • Image preprocessing and augmentation techniques
    • Classification metrics: accuracy, precision, recall, F1, confusion matrix, ROC, PR curves
    • Handling imbalanced datasets and low-frequency classes

    - Experience training and evaluating offline models on real production or near-production datasets
    - Ability to structure and document experiments, compare baselines, and justify design decisions
    - Strong analytical and problem-solving skills; attention to detail in data quality and labelling
    - Good communication skills in English (written and spoken) to interact with internal and client stakeholders
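
    For the imbalanced-dataset requirement above, one standard baseline is inverse-frequency class weighting (the heuristic scikit-learn calls "balanced"): weight each class by n_samples / (n_classes * count_class). A stdlib-only sketch (the helper name is illustrative):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class inversely to its frequency:
    w_c = n_samples / (n_classes * count_c)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}
```

    With 90 "good" and 10 "defect" samples this yields weights of roughly 0.56 and 5.0, so rare defect classes contribute comparably to the loss; augmenting rare classes (as described above) is the complementary data-side technique.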

    Nice-to-have:
    - Experience with industrial / manufacturing computer vision (AOI, quality inspection, defect detection, etc.)
    - Familiarity with ML Ops/deployment concepts (ONNX, TensorRT, Docker, REST APIs, edge devices)
    - Experience working with time-critical or high-throughput inspection systems
    - Background in electronics, semiconductors, or similar domains is an advantage
    - Experience preparing client-facing reports and presenting technical results to non-ML audiences

    We offer:
    - Free English classes with a native speaker and compensation for external courses;
    - PE support by professional accountants;
    - 40 days of PTO;
    - Medical insurance;
    - Team-building events, conferences, meetups, and other activities;
    - Many other benefits you’ll learn about at the interview.

  • 20 views · 3 applications · 10d

    Senior Data Scientist

    Full Remote · Countries of Europe or Ukraine · 6 years of experience · English - B2

    Our Customer:

    Our customer is a global, data-driven digital advertising technology company operating at large scale across mobile apps and web platforms.

     

    Your Tasks:

    • Analyze large, complex datasets to uncover patterns, insights, and business opportunities;
    • Design, build, and own scalable, end-to-end data pipelines;
    • Develop and productionize advanced machine learning models and algorithms, particularly in monetization-related domains;
    • Build and maintain data models for real-time (online) processing;
    • Contribute to the creation of new data-driven products and continuously improve existing ones;
    • Collaborate with Product, BI, Analytics, DevOps, R&D, and Marketing teams to deliver complete solutions;
    • Lead full production releases, including requirements analysis, testing, monitoring, result evaluation, and rapid response to critical issues.

     

    Required Experience and Skills:

    • 6+ years of experience as a Data Scientist with a strong foundation in machine learning concepts and models;
    • Previous experience in the Ad-Tech industry;
    • Hands-on experience working on large-scale, high-load systems processing billions of requests per second;
    • Hands-on experience with Python including writing production-ready code and working with modern Python frameworks and tooling;
    • Experience with data science / ML libraries and models, such as regression and classification models (e.g. XGBoost, LightGBM), recommendation systems, and related ML techniques;
    • Solid understanding of databases and SQL for data analysis and retrieval;
    • Experience with monitoring and alerting tools such as Grafana and Kibana;
    • Proven experience working in real-time / online environments;
    • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field;
    • Strong end-to-end ownership mindset: ability to take solutions from research and experimentation through implementation and production;
    • Fluent English with strong communication and collaboration skills.

     

    Would Be a Plus:

    • Familiarity with the programmatic advertising ecosystem (DSPs, SSPs, ad exchanges);
    • Experience with relational databases (e.g. Vertica, VoltDB) and non-relational databases (e.g. MongoDB);
    • Experience working with PMML.

     

    Working conditions

    • 5-day working week, 8-hour working day;
    • Remote work.
  • 29 views · 1 application · 11d

    Data Scientist / Machine Learning Engineer - AI at Massive Scale

    Office Work · Ukraine (Dnipro, Lviv) · Product · 3 years of experience · English - B1

    Help us push AI further — and faster

    LoopMe’s Data Science team builds production AI that powers real-time decisions for campaigns seen by hundreds of millions of people every day. We process billions of data points daily — and we don’t just re-apply old tricks. We design and deploy genuinely novel machine learning systems, from idea to prototype to production.

    You’ll join a high-trust team with a 5-star Glassdoor rating, led by Leonard Newnham, where your work moves fast, ships to production, and makes measurable impact.

     

    What you’ll do:

    • Design, build, and run large-scale ML pipelines that process terabytes of data
    • Apply a mix of supervised learning, custom algorithms, and statistical modelling to real-world problems
    • Ship production-grade Python code that’s clear, documented, and tested
    • Work in small, agile squads (3–4 people) with DS, ML, and engineering peers
    • Partner with product and engineering to take models from idea → production → impact
    • Work with Google Cloud, Docker, Kafka, Spark, Airflow, ElasticSearch, ClickHouse and more

     

    What you bring:

    • Bachelor’s degree in Computer Science, Maths, Engineering, Physics or similar (MSc/PhD a plus)
    • 3+ years’ commercial Python experience
    • Track record building ML pipelines that handle large-scale data
    • Excellent communication skills — comfortable working across time zones
    • A curious, scientific mindset — you ask “why?” and prove the answer

     

    Bonus if you have:

    • Experience with adtech or real-time bidding
    • Agile / Scrum experience
    • Knowledge of high-availability infrastructure (ElasticSearch, Kafka, ClickHouse)
    • Airflow expertise

     

    About the Data Science Team:

    We’re 17 ML engineers, data scientists, and data engineers, distributed across London, Poland, and Ukraine — acting as one team, not a satellite office.

    What sets us apart:

    • Led by an experienced Chief Data Scientist who codes, leads, and listens
    • Inclusive, supportive culture where ideas are heard and people stay
    • Strong values: open communication, continual innovation, fair treatment, and high standards
    • Track record of publishing award-winning research in automated bidding

    Don’t just take our word for it — check our Glassdoor reviews (search “Data Scientist”) for a real view of the culture.

     

    About LoopMe:

    LoopMe was founded to close the loop on brand advertising. Our platform combines AI, mobile data, and attribution to deliver measurable brand outcomes — from purchase intent to foot traffic. Founded in 2012, we now have offices in New York, London, Chicago, LA, Dnipro, Singapore, Beijing, Dubai and more.

     

    What we offer:

    • Competitive salary + bonus
    • Billions of real-world data points to work with daily
    • Flexible remote/hybrid options
    • Learning budget and career growth support
    • Friendly, transparent culture with strong leadership

     

    Hiring process:

    1. Intro with Talent Partner
    2. 30-min technical interview with Chief Data Scientist
    3. Panel with 2 team members (technical, culture & collaboration)
    4. Offer – usually within 48 hours of final round

     

    Are you ready to design and deploy AI systems that run at truly massive scale?

  • 42 views · 0 applications · 11d

    Machine Learning Engineer

    Hybrid Remote · Ukraine · Product · 3 years of experience · English - B2

    As a Machine Learning Engineer, you'll work as part of the Wix CTO Office team, researching problems that can give Wix’s products a competitive edge across various challenges. A key focus of the team is on Agents over LLMs, exploring new techniques for building agents and developing innovative products that leverage them.  

     

    In your day-to-day, you will:  

    • Build POCs for research projects led by the team  
    • Evaluate results and provide actionable insights  
    • Collaborate with different teams at Wix to advance their agent implementations  
    • Build shared infrastructure for agents  

       

    Requirements

    • Creativity and willingness to tackle ambitious, high-risk problems  
    • 3+ years of experience working on production code with active users
    • BSc in Computer Science or related field, MSc preferred  
    • Proficient in Python; TypeScript is a significant advantage  
    • Experience in training and evaluating Machine Learning models  
    • Hands-on experience with building GenAI systems using LLMs and agents  
    • Proven ability to work in a collaborative, cross-functional environment  
    • Excellent written and verbal communication skills in English 

     

    About the Team  

    We are Wix's Data Science CTO Office team, a small group of researchers and engineers. We collaborate with various groups at Wix and the CEO on innovative research projects. Some projects aim to enhance Wix products with new features, while others focus on strategic research areas that can provide Wix with a competitive advantage.

     

  • 24 views · 1 application · 11d

    Data Scientist

    Full Remote · Poland · 3 years of experience · English - B2

    Hello everyone 👋

    At Intobi, we're a software and product development company passionate about driving innovation and progress.

    We help our clients succeed by delivering custom-built tech solutions designed to meet their unique needs.

    Our expertise lies in developing cutting-edge Web and Mobile applications.

     

    We are seeking an experienced Mid/Mid+ Data Scientist with expertise in Large Language Models (LLMs) such as GPT, Claude, and related technologies. The ideal candidate will have a strong background in natural language processing (NLP), machine learning, and deep learning models. They will play a critical role in developing and deploying cutting-edge LLM applications to drive innovation across our product lines.

     

    Responsibilities:


    — Design, develop, and optimize Large Language Models for various NLP tasks such as text generation, summarization, translation, and question-answering

    — Conduct research and experiments to push the boundaries of LLM capabilities and performance

    — Collaborate with cross-functional teams (engineering, product, research) to integrate LLMs into product offerings

    — Develop tools, pipelines, and infrastructure to streamline LLM training, deployment, and monitoring

    — Analyze and interpret model outputs, investigate errors/anomalies, and implement strategies to improve accuracy

    — Stay current with the latest advancements in LLMs, NLP, and machine learning research

    — Communicate complex technical concepts to both technical and non-technical stakeholders

     

    Requirements:


    — MS or PhD degree in Computer Science, Data Science, AI, or a related quantitative field

    — 2–3+ years of hands-on experience developing and working with deep learning models, especially in NLP/LLMs

    — Expert knowledge of Python, PyTorch, TensorFlow, and common deep learning libraries

    — Strong understanding of language models, attention mechanisms, transformers, and sequence-to-sequence modeling

    — Experience training and fine-tuning large language models

    — Experience with classical ML models such as XGBoost and LightGBM

    — Proficiency in model deployment, optimization, scaling, and serving

    — Excellent problem-solving, analytical, and quantitative abilities

    — Strong communication skills to present technical information clearly

    — Ability to work collaboratively in a team environment

    — Fluency in Ukrainian and English

     

    Preferred:

    — Research experience in LLMs, NLP, and machine learning

    — Experience working with multi-modal data (text, image, audio)

    — Knowledge of cloud platforms such as AWS and GCP for model training

    — Understanding of MLOps and production ML workflows

    — Background in information retrieval, knowledge graphs, and reasoning

     

     

    Please send your CV here or via email

     

    If the first stage is completed successfully, you’ll be invited to a personal interview.

  • 39 views · 5 applications · 11d

    Game Mathematician (iGaming / Slots) project-based position

    Part-time · Full Remote · Worldwide · 2 years of experience · English - B1

    We are looking for a Game Mathematician (project-based position) to work on slot games within an iGaming product. This role is for someone who can turn mathematics into an engaging and well-balanced gaming experience, works confidently with probabilities, RTP, and simulations, and collaborates effectively with developers and QA.

     

    Responsibilities

    • Design and calculate mathematical models for slot games (RTP, volatility, hit rate, max win).
    • Build and balance core game elements: paytables, reels/weights, wild & scatter logic, free spins, respins, multipliers, bonus features.
    • Prepare complete mathematical documentation for implementation: formulas, rules, tables, edge cases, configurations.
    • Run and analyze Monte Carlo simulations to validate RTP, payout distribution, and volatility.
    • Prepare math validation reports.
    • Support developers and QA during integration and testing phases.
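
    The Monte Carlo responsibility above can be illustrated with a deliberately toy model: a 3-symbol payline with a made-up, unweighted reel and paytable (all symbols and payouts here are invented for the sketch; real games use weighted reel strips and audited configurations):

```python
import random

PAYTABLE = {("A", "A", "A"): 50, ("K", "K", "K"): 20, ("Q", "Q", "Q"): 10}
REEL = ["A", "K", "Q", "J", "T"]  # uniform, unweighted toy reel

def simulate(n_spins: int, bet: float = 1.0, seed: int = 42):
    """Estimate RTP and hit rate by simulating seeded random spins."""
    rng = random.Random(seed)  # fixed seed -> reproducible validation runs
    total_win, hits = 0.0, 0
    for _ in range(n_spins):
        symbols = tuple(rng.choice(REEL) for _ in range(3))
        win = PAYTABLE.get(symbols, 0) * bet
        total_win += win
        hits += win > 0
    return total_win / (n_spins * bet), hits / n_spins
```

    For this toy the exact values are computable: each winning triple has probability 1/125, so theoretical RTP is (50 + 20 + 10) / 125 = 0.64 and hit rate is 3/125 = 0.024. The simulation should converge to these, which is precisely the cross-check a math validation report documents.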

       

    Optional / Additional Responsibilities

    • Research iGaming trends and competitor mechanics.
    • Participate in the full game lifecycle (from concept to release).
    • Analyze player behavior using mathematical and statistical data.

       

    Requirements

    • Commercial experience in slot math/casino game mathematics.
    • Strong understanding of probability theory, combinatorics, and statistics.
    • Clear understanding of RTP structure (base vs bonus contribution), volatility management, and feature triggers.
    • Ability to produce clear and well-structured technical documentation for engineering teams.
    • Strong analytical thinking, attention to detail, and ownership of results.

       

    Nice to Have

    • Confident knowledge of Python.
    • Experience building simulation scripts/tools (config-driven, reproducible simulations, result export).
    • Experience with jackpot / progressive models.
    • Understanding of compliance and regulatory requirements in iGaming.

       

    What We Offer

    • Work on real slot games in the iGaming domain.
    • Direct impact on core gameplay and the mathematical model of the product.
    • Collaboration with an experienced team (game design, development, QA).
    • Long-term cooperation and a stable project.
    • Real Product, Real Impact: Work on a live product, not an experimental project.
    • Flexible Work Environment: Work from anywhere globally with a competitive compensation package.
    • Dynamic Team: Join a group of passionate game development experts who share your love for creativity and innovation.
    • Fun and Supportive Culture: We foster a collaborative, supportive team environment where your ideas and contributions are valued.
  • 102 views · 8 applications · 12d

    Head of Data Science

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B2

    About Everstake
    Everstake is the largest decentralized staking provider in Ukraine and one of the top 5 blockchain validators worldwide. We help institutional and retail investors participate in staking across more than 85 blockchain networks, including Solana, Ethereum, Cosmos, and many others. By building secure, scalable, and reliable blockchain infrastructure, we support the growth of the global Web3 ecosystem and enable the adoption of decentralized technologies worldwide.
     

    About the Role
    We are looking for a Head of Data Science to own and scale Everstake’s data science and analytics function. This is a hands-on leadership role with a strong technical focus. You will define the data science direction, lead senior-level engineers, and work closely with the CDO, product, engineering, and business teams to drive data-informed decisions across a complex Web3 infrastructure.
    You will be responsible not only for analytics and modeling, but also for data architecture, orchestration, performance, reliability, and engineering standards in a fast-growing blockchain environment.

    Key Responsibilities:

    • Own and evolve data science and analytics architecture across Everstake
    • Design and maintain scalable data pipelines, metrics layers, and analytical models
    • Lead technical decision-making across data platforms, BI, and orchestration
    • Translate blockchain, product, and business problems into clear data solutions
    • Define data standards, best practices, and development guidelines
    • Review code, data models, and pipelines for quality, performance, and correctness
    • Mentor senior data scientists and analysts, provide technical leadership
    • Partner closely with product, backend, infrastructure, and finance teams
    • Ensure data reliability, observability, and correctness in production
    • Actively contribute hands-on where technical depth is required


    Requirements (Must-Have):
    Seniority & Leadership

    • 6+ years of professional experience in data-related roles
    • Strong experience as a Senior / Lead Data Scientist or Analytics Engineer
    • Proven ability to lead technically strong teams and initiatives
    • Ability to balance hands-on execution with leadership responsibilities

     

    Core Technical Skills

    • Python — expert level (data processing, analytics, modeling, production code)
    • Apache Airflow — 2–3+ years of hands-on experience
       (DAG design, dependencies, retries, backfills, monitoring, failure handling)
       

    Databases & Warehouses

    • ClickHouse (performance tuning, large-scale analytics)
    • PostgreSQL
    • Snowflake


    BI & Analytics

    • Power BI and/or Tableau
    • Strong understanding of semantic layers, metrics definitions, and data modeling 


    Infrastructure & Observability

    • Docker
    • Git
    • Grafana (monitoring data pipelines and platform health)
       

    Data & Systems Thinking

    • Strong understanding of data modeling (facts, dimensions, slowly changing data)
    • Experience designing KPIs and metrics that actually reflect business reality
    • Ability to identify incorrect assumptions, misleading metrics, and data biases
    • Experience working with high-volume, high-frequency, or near-real-time data
    • Strong SQL skills and performance-oriented thinking


    Blockchain / Crypto Domain (Required)

    • Practical experience in blockchain, crypto, or Web3 products
    • Experience working with blockchain-derived datasets or crypto-financial metrics
    • Ability to reason about probabilistic, noisy, and incomplete on-chain data
    • Understanding of:
      - Blockchain mechanics (validators, staking, rewards, transactions)
      - Wallets, addresses, and transaction flows
      - On-chain vs off-chain data


    Soft Skills:

    • Systems and critical thinking
    • Strong communication skills with technical and non-technical stakeholders
    • Team-oriented mindset with high ownership and accountability
    • Fluent English (B2+ or higher)


    Nice-to-Have:

    • Experience in staking, DeFi, or blockchain infrastructure companies
    • Background in analytics engineering or data platform teams
    • Experience building data systems from scratch or scaling them significantly
    • Familiarity with financial or yield-related metrics
    • Experience working in globally distributed teams


    What We Offer:

    • Opportunity to work on mission-critical Web3 infrastructure used globally
    • Head-level role with real influence on data and technical strategy
    • Fully remote work format
    • Competitive compensation aligned with experience and seniority
    • Professional growth in a top-tier Web3 engineering organization
    • Strong engineering culture with focus on quality, ownership, and impact
  • · 30 views · 3 applications · 12d

    Data Architect

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2
    The client is a pioneer in medical devices for less invasive surgical procedures, ranking as a leader in the market for coronary stents. The company's medical devices are used in a variety of interventional medical specialties, including interventional...

    The client is a pioneer in medical devices for less invasive surgical procedures, ranking as a leader in the market for coronary stents. The company's medical devices are used in a variety of interventional medical specialties, including interventional cardiology, peripheral interventions, vascular surgery, electrophysiology, neurovascular intervention, oncology, endoscopy, urology, gynecology, and neuromodulation.
    The client's mission is to improve the quality of patient care and the productivity of health care delivery through the development and advocacy of less-invasive medical devices and procedures. This is accomplished through the continuing refinement of existing products and procedures and the investigation and development of new technologies that can reduce risk, trauma, cost, procedure time, and the need for aftercare.




    Job Description

    Boston Scientific is seeking a highly motivated R&D Data Engineer to support our R&D team in data management and development of complex electro-mechanical medical device systems. In this role you will use your technical and collaboration skills alongside your passion for data, innovation, and continuous improvement to help drive our product development forward.

    • Design a systems-level architecture for clinical, device, and imaging data and pipelines to support machine learning & classical algorithm development throughout the product lifecycle.
    • Ensure the architecture supports high-throughput image ingestion, indexing, and retrieval.
    • Advance conceptual, logical, and physical data models for structured, semi-structured, and unstructured data.
    • Help define and document data standards and definitions.
    • Implement governance frameworks that apply healthcare and data regulations (HIPAA, FDA 21 CFR Part 11, GDPR, etc.) across the data architecture.
    • Perform strategic validation of data management tools and platforms.
    • Collaborate closely with data scientists, cloud data engineers, algorithm engineers, clinical engineers, software engineers, and systems engineers locally and globally.
    • Investigate, research, and recommend appropriate software designs, machine learning operations, and tools for dataset organization, controls, and traceability.
    • In all actions, lead with integrity and demonstrate a primary commitment to patient safety and product quality by maintaining compliance with all documented quality processes and procedures.

    Required Qualifications
    • Bachelor's degree or higher in Computer Science, Software Engineering, Data Science, Biomedical Engineering, or a related field
    • 6+ years of relevant work experience with a Bachelor's degree
    • 3+ years of relevant work experience with a Master's or PhD
    • 4+ years of consistent coding in Python
    • Strong understanding and use of relational databases and clinical data models
    • Experience working with medical imaging data (DICOM), computer vision algorithms, and tools
    • Experience with AWS and cloud technologies and AWS DevOps tools
    • Experience creating and managing CI/CD pipelines in AWS
    • Experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or AWS CDK
    • Excellent organizational, communication, and collaboration skills
    • Foundational knowledge of machine learning (ML) operations and imaging ML pipelines

    Preferred Qualifications
    • Experience with software validation in a regulated industry
    • Experience with cloud imaging tools (e.g., AWS Health Imaging, Azure Health Data Services)
    • Working knowledge of data de-identification/pseudonymization methods
    • Experience manipulating tabular metadata using SQL and Python's Pandas library
    • Experience with the Atlassian toolchain
    • Experience with data and annotation version control tools and processes
    • Knowledge of HIPAA, FDA regulations (21 CFR Part 11), and GDPR for medical device data governance.

  • · 41 views · 3 applications · 12d

    Senior Data Scientist

    Full Remote · Ukraine · Product · 2 years of experience · English - B1 Ukrainian Product 🇺🇦
    Hello! We are E-Com, a team of Foodtech and Ukrainian product lovers. And we also break stereotypes that retail is only about tomatoes. Believe me, the technical part of our projects provides a whole field for creativity and brainstorming. What we are...

    Hello!

     

    We are E-Com, a team of Foodtech and Ukrainian product lovers.

    And we also break stereotypes that retail is only about tomatoes. Believe me, the technical part of our projects provides a whole field for creativity and brainstorming.

     

    What we are currently working on:

    • we are upgrading the existing delivery of a wide range of products from Silpo stores;
    • we are developing super-fast delivery of products and dishes under the new LOKO brand.

     

    We are developing a next-generation Decision Support Platform that connects demand planning, operational orchestration, and in-store execution optimization into one unified Analytics and Machine Learning Ecosystem.

     

    The project focuses on three major streams:

    • Demand & Forecasting Intelligence: building short-term demand forecasting models, generating granular demand signals for operational planning, identifying anomalies, and supporting commercial decision logic across virtual warehouse clusters.

    • Operational Orchestration & Task Optimization: designing predictive models for workload estimation, task duration (ETA), and prioritization. Developing algorithms that automatically map operational needs into structured tasks and optimize their sequencing and allocation across teams.

    • In-Store Execution & Routing Optimization: developing models that optimize picker movement, predict in-store congestion, and recommend optimal routes and execution flows. Integrating store layout geometry, product characteristics, and operational constraints to enhance dark-store efficiency.

     

    You will join a cross-functional team to design and implement data-driven decision modules that directly influence commercial and operational decisions.

     

    Responsibilities:

    • develop and maintain ML models for forecasting short-term demand signals and detecting anomalies across virtual warehouse clusters;
    • build predictive models to estimate task workload, execution times (ETA), and expected operational performance;
    • design algorithms to optimize task distribution, sequencing, and prioritization across operational teams;
    • develop routing and path-optimization models to improve picker movement efficiency within dark stores;
    • construct data-driven decision modules that integrate commercial rules, operational constraints, and geometric layouts;
    • translate business requirements into ML-supported decision flows and automate key parts of operational logic;
    • build SQL pipelines and data transformations for commercial, operations, and logistics datasets;
    • work closely with supply chain, dark store operations, category management, and IT to deliver measurable improvements;
    • conduct A/B testing, validate model impact, and ensure high-quality model monitoring.

     

    Requirements:

    • Bachelor's degree in Mathematics / Quantitative Economics / Econometrics / Statistics / Computer Science / Finance;
    • at least 2 years of working experience in Data Science;
    • strong mathematical background in linear algebra, probability, statistics, and optimization techniques;
    • proven experience with SQL (window functions, CTEs, joins) and Python;
    • expertise in machine learning, time series analysis, and the application of statistical concepts (hypothesis testing, A/B tests, PCA);
    • ability to work independently and decompose complex problems.

     

    Preferred:

    • experience with Airflow, Docker, or Kubernetes for data orchestration;
    • practical experience with Amazon SageMaker: training, deploying, and monitoring ML models in a production environment;
    • knowledge of reporting and business intelligence software (Power BI, Tableau, Looker);
    • ability to design and deliver packaged analytical/ML solutions.

     

    What we offer

    • competitive salary;
    • opportunity to work on flagship projects impacting millions of users;
    • flexible remote or office-based work (with backup power and reliable connectivity at SilverBreeze Business Center);
    • flexible working schedule;
    • medical and life insurance packages;
    • support for GIG contract or private entrepreneurship arrangements;
    • discounts at Fozzy Group stores and restaurants;
    • psychological support services;
    • caring corporate culture;
    • a team where you can implement your ideas, experiment, and feel like you are among friends.
  • · 41 views · 8 applications · 12d

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2
    Domain: iGaming / Gambling Format: Product company Experience: 5+ years About the Company We are an international product company operating in the gambling sector. Our platform delivers real-time personalization for casino and sportsbook products using...

    Domain: iGaming / Gambling
    Format: Product company
    Experience: 5+ years


    About the Company

    We are an international product company operating in the gambling sector. Our platform delivers real-time personalization for casino and sportsbook products using advanced machine learning. The solution processes live behavioral, transactional, and contextual data to improve player engagement, retention, and overall performance for operators worldwide.


    Our focus is on building production-grade ML systems that directly influence what users see in real time — from game recommendations to personalized content and promotions.


    Role Overview

    We are looking for a Senior Data Scientist to join a product-focused team working on real-time personalization and recommendation systems for iGaming platforms.

    This is a hands-on role that combines modeling, experimentation, and close collaboration with engineering and product teams in a high-load, real-time production environment.


    Main Responsibilities

    • Develop ML-driven features for casino products using supervised learning (regression, ranking, classification)
    • Maintain and improve existing recommendation systems in production
    • Enhance models using gradient boosting and other supervised approaches
    • Perform data cleaning, preprocessing, and feature engineering
    • Design and maintain pre- and post-processing workflows
    • Optimize training and inference pipelines for performance and reliability
    • Integrate ML models into Airflow pipelines in a multi-tenant environment
    • Adapt and configure the solution for different clients (tenants)
    • Collaborate closely with product and engineering teams on experimentation and feature delivery


    As Part of the Team You Will

    • Work cross-functionally with data scientists, engineers, product owners, designers, and researchers
    • Analyze large-scale datasets to extract insights for product and business decisions
    • Propose, implement, and evaluate ML approaches to solve real business problems
    • Support and evolve a recommendation solution used across multiple tenants
    • Influence product strategy through research, experimentation, and data-driven insights into user behavior


    Experience & Education

    • 5+ years of professional experience in data science
    • Degree in a quantitative field (Mathematics, Statistics, Computer Science, or similar)


    Core Skills

    • Strong proficiency in Python and SQL
    • Hands-on experience with data processing tools (Pandas, Polars)
    • Solid engineering skills for building and maintaining scalable ML systems
    • Experience implementing observability in ML pipelines (metrics, logging, alerting)
    • Knowledge of Docker and Kubernetes
    • Strong analytical mindset with the ability to solve loosely defined problems
    • Hands-on experience with supervised ML techniques, including:
      • Regression and ranking models (XGBoost, LightGBM, CatBoost, neural networks)
      • Feature engineering and model evaluation (AUC, NDCG, MSE, uplift metrics)
      • Personalization or recommendation systems
    • Proven experience deploying ML models to production (near real-time or batch)
    • Solid understanding of statistical methods (A/B testing, significance testing)


    Nice to Have

    • Production experience with large-scale recommendation systems
    • Experience with Airflow, Valkey/Redis, FastAPI in production
    • Familiarity with contextual bandits or reinforcement learning
    • Experience with AutoML tools
  • · 61 views · 4 applications · 12d

    Computer Vision/Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 1 year of experience · English - B2
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently...

    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the role:
    We are looking for a Computer Vision / Machine Learning Engineer to develop offline CV models for industrial visual inspection.


    Your main task will be to design, train, and evaluate models on inspection data in order to:

     

    • Improve discrimination between good vs. defect samples
    • Provide insights into key defect categories (e.g., terminal electrode irregularities, surface chipping)
    • Significantly reduce false-positive rates, optimizing for either precision or recall
    • Prepare the solution for future deployment, scaling, and maintenance

    Key Responsibilities:
    Data Analysis & Preparation
    - Conduct dataset audits, including class balance checks and sample quality reviews
    - Identify low-frequency defect classes and outliers
    - Design and implement augmentation strategies for rare defects and edge cases
    Model Development & Evaluation
    - Train deep-learning models on inspection images for defect detection
    - Use modern computer vision / deep learning frameworks (e.g., PyTorch, TensorFlow)
    - Evaluate models using confusion matrices, ROC curves, precision–recall curves, F1 scores, and other relevant metrics
    - Analyze false positives/false negatives and propose thresholds or model improvements
    Reporting & Communication
    - Prepare clear offline performance reports and model evaluation summaries
    - Explain classifier decisions, limitations, and reliability in simple, non-technical language when needed
    - Provide recommendations for scalable deployment in later phases (e.g., edge / on-prem inference, integration patterns)

    Candidate Requirements:
    Must-have:
    - 1-2 years of hands-on experience with computer vision and deep learning (classification, detection, or segmentation)
    - Strong proficiency in Python and at least one major DL framework (PyTorch or TensorFlow/Keras)
    - Solid understanding of:

    • Image preprocessing and augmentation techniques
    • Classification metrics: accuracy, precision, recall, F1, confusion matrix, ROC, PR curves
    • Handling imbalanced datasets and low-frequency classes

    - Experience training and evaluating offline models on real production or near-production datasets
    - Ability to structure and document experiments, compare baselines, and justify design decisions
    - Strong analytical and problem-solving skills; attention to detail in data quality and labelling
    - Good communication skills in English (written and spoken) to interact with internal and client stakeholders

    Nice-to-have:
    - Experience with industrial / manufacturing computer vision (AOI, quality inspection, defect detection, etc.)
    - Familiarity with ML Ops/deployment concepts (ONNX, TensorRT, Docker, REST APIs, edge devices)
    - Experience working with time-critical or high-throughput inspection systems
    - Background in electronics, semiconductors, or similar domains is an advantage
    - Experience preparing client-facing reports and presenting technical results to non-ML audiences

    We offer:
    - Free English classes with a native speaker and compensation for external courses;
    - PE support by professional accountants;
    - 40 days of PTO;
    - Medical insurance;
    - Team-building events, conferences, meetups, and other activities;
    - There are many other benefits you'll find out at the interview.

  • · 35 views · 3 applications · 12d

    Data Scientist – Autonomous Systems (Computer Vision)

    Office Work · Ukraine (Kyiv, Lviv) · Product · 2 years of experience · English - None MilTech 🪖
    We are looking for a Data Scientist eager to grow in the field of autonomous systems, with a focus on computer vision, control theory, and data-driven modeling. This role is ideal for someone with strong analytical skills and a passion for applying data...

    We are looking for a Data Scientist eager to grow in the field of autonomous systems, with a focus on computer vision, control theory, and data-driven modeling. This role is ideal for someone with strong analytical skills and a passion for applying data science to real-world autonomy challenges.

    Key Responsibilities

    • Assist in developing vision-based algorithms for perception and navigation.
    • Support data analysis and sensor fusion for multi-sensor systems.
    • Contribute to modeling and simulation tasks under guidance from senior engineers.
    • Work with datasets from cameras, IMUs, and other sensors to extract insights.
    • Stay up-to-date with recent research in computer vision, autonomy, and data science.

    Required Qualifications

    • 2+ years of experience in computer vision or data analysis.
    • Understanding of geometric computer vision principles.
    • Basic knowledge of control theory, PID controllers, signal processing, and data-driven modeling.
    • Programming skills in Python and C++.
    • Familiarity with Linux and single-board computers.
    • Strong willingness to learn and adapt quickly.
    • Relevant work experience or education in a STEM field.

    Nice to Have

    • Exposure to SLAM or Visual-Inertial Odometry (VIO).
    • Familiarity with OpenCV, NumPy, and basic ML frameworks (PyTorch, TensorFlow).
    • Knowledge of ROS2, Gazebo, or AirSim.
    • Experience with PX4, Betaflight, or ArduPilot.
    • Basic understanding of neural networks and CV frameworks.
    • Interest in reinforcement learning or predictive modeling.