Jobs Data Science

  • · 43 views · 9 applications · 28d

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    We are looking for an experienced Data Scientist to join our team in a full-time position.


    Requirements:
    - 2+ years of experience as a data scientist
    - BSc in Mathematics, Statistics, Computer Science, Economics, or another related field.
    - Experience in using Python & SQL
    - Experience with Airflow and GCP
    - Experience with Git & CI/CD
    - Upper-Intermediate English (written and spoken)
    - Ability to design creative solutions for complex requirements
    - Ability to learn and lead projects independently, and to work with minimal supervision with customers (tech & business)

    Responsibilities:
    - Conduct independent research, including defining research problems, creating research plans, designing experiments, developing algorithms, implementing code, and performing comprehensive comparisons against existing benchmarks;
    - Clearly communicate your research findings to both technical and non-technical audiences
    - Work on various data sources and apply sophisticated feature engineering capabilities
    - Bring and use business knowledge
    - Build and manage technical relationships with customers and partners.

    We offer:
    - Full-time remote job, B2B contract
    - 12 sick leaves and 18 paid vacation business days per year
    - Comfortable work conditions (including MacBook Pro and Dell monitor on each workplace)
    - Smart environment
    - Interesting projects from renowned clients
    - Flexible work schedule
    - Competitive salary according to qualifications
    - Guaranteed full workload during the term of the contract
     

  • · 75 views · 5 applications · 27d

    Middle Data Scientist (Operations Digital Twin)

    Full Remote · Worldwide · Product · 2 years of experience · English - B2 · Ukrainian Product 🇺🇦

    About us

    Fozzy Group is one of the largest trade industrial groups in Ukraine and one of the leading Ukrainian retailers, with over 700 outlets all around the country. It is also engaged in e-commerce, food processing & production, agricultural business, parcel delivery, logistics and banking.

    Since its inception in 1997, Fozzy Group has focused on making innovative business improvements, creating new opportunities for the market and further developing the industry as a whole.
     

    Job Description:

    The Foodtech team is looking for a Data Scientist to develop the Operational Analytics function for a fast-growing food delivery business. In this role, you will focus on time series forecasting, regression modeling, simulation modeling, and end-to-end machine learning pipelines to support resource planning and operational decision-making.

    You will be responsible for developing simulation-based models that serve as a foundation for a digital twin of operational processes, enabling scenario analysis, stress testing, and what-if simulations for capacity planning and operational optimization.

    You will work closely with product, engineering, and operations teams to transform data into measurable business impact through production-ready ML and simulation solutions.
     

    Job Responsibilities

    • Develop and implement time series forecasting models for resource planning (demand, capacity, couriers, delivery slots, operational load);
    • Build regression and machine learning models to explain key drivers and support operational decisions;
    • Apply a wide range of time series approaches, from classical models (SARIMA, ETS, Prophet) and ML models (gradient boosting) to modern deep learning models (LSTM, temporal CNNs, Transformers for time series);
    • Design, build, and maintain end-to-end automated ML pipelines; deploy and operate models in production using AWS SageMaker;
    • Orchestrate training and inference workflows with Apache Airflow;
    • Analyze large-scale operational datasets and convert results into insights, forecasts, and actionable recommendations;
    • Collaborate with product managers, engineers, and operations teams to define business problems and validate analytical solutions;
    • Monitor model performance, forecast stability, and business impact over time.
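    The classical end of the forecasting spectrum above can be illustrated with a minimal sketch. Simple exponential smoothing is the simplest member of the ETS family; the function name and order counts below are hypothetical, and a real pipeline would use a library such as statsmodels.

```python
# Minimal sketch of simple exponential smoothing (the simplest ETS-style
# model) producing one-step-ahead forecasts. Illustrative only: the
# function name and the order counts are made up.
def ses_forecast(series, alpha=0.5):
    """Return a one-step-ahead forecast for each point in `series`."""
    forecasts = []
    level = series[0]  # initialize the level with the first observation
    for y in series:
        forecasts.append(level)                  # forecast made before seeing y
        level = alpha * y + (1 - alpha) * level  # smoothing update
    return forecasts

orders = [100, 120, 110, 130, 125]  # hypothetical daily order counts
preds = ses_forecast(orders, alpha=0.5)
```

    A SARIMA or gradient-boosting model would replace the update rule, but the one-step-ahead evaluation loop stays structurally the same.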
     

    Requirements

    • Bachelor’s degree in Mathematics / Engineering / Computer Science / Quantitative Economics / Econometrics;
    • Strong mathematical background in linear algebra, probability, statistics & optimization techniques;
    • At least 2 years of working experience in Data Science;
    • Experience with the full cycle of model implementation (data collection, model training and evaluation, model deployment and monitoring);
    • Ability to work independently, proactively, and to decompose complex problems into actionable tasks.
     

    Skills

    Must Have

    • Strong proficiency in Python with solid application of object-oriented programming (OOP) principles (modular design, reusable components, maintainable code);
    • Solid experience in time series forecasting and regression modeling;
    • Practical knowledge of:
      o Classical and ML forecasting techniques;
      o Statistical methods (hypothesis testing, confidence intervals, A/B testing);
    • Advanced SQL skills (window functions, complex queries);
    • Experience building automated ML pipelines;
    • Understanding of MLOps principles (model versioning, monitoring, CI/CD for ML).
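    The window-function SQL called out above can be sketched with Python's built-in sqlite3 module (window functions require SQLite 3.25+, bundled with modern Python builds); the table and numbers are hypothetical.

```python
import sqlite3

# Sketch of the kind of window-function SQL mentioned above: a 3-row
# moving average of daily order counts over an in-memory table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders(day INTEGER, n INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 100), (2, 120), (3, 110), (4, 130)])
rows = con.execute("""
    SELECT day,
           AVG(n) OVER (ORDER BY day
                        ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS ma3
    FROM orders
    ORDER BY day
""").fetchall()
```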
     

    Preferred

    • Hands-on experience with AWS SageMaker (training jobs, endpoints, model registry);
    • Experience with Apache Airflow for data and ML workflow orchestration;
    • Knowledge of reporting and business intelligence software (Power BI, Tableau);
    • Experience working with large-scale production data systems.
     

    What We Offer

    • Competitive salary;
    • Professional & personal development opportunities;
    • Being part of a dynamic team of young & ambitious professionals;
    • Corporate discounts for sports clubs and language courses;
    • Medical insurance package.

  • · 29 views · 4 applications · 26d

    Data Scientist

    Full Remote · Ukraine · 5 years of experience · English - B2

    We are looking for you!

    We are seeking a Senior Data Scientist to drive the next generation of data-driven solutions. This role calls for deep expertise in data architecture, advanced analytics, and pipeline design. If you are a seasoned professional ready to lead initiatives, innovate with cutting-edge techniques, and deliver impactful data solutions, we’d be excited to have you join our journey.

    Contract type: Gig contract
     

    Skills and experience you can bring to this role

    Qualifications & experience:

    • At least 3 years of commercial experience with Python, the data stack (NumPy, Pandas, scikit-learn) and a web stack (FastAPI / Flask / Django);
    • Familiarity with one or more machine learning frameworks (XGBoost, TensorFlow, PyTorch);
    • Strong mathematical and statistical skills;
    • Good understanding of SQL/RDBMS and familiarity with data warehouses (BigQuery, Snowflake, Redshift, etc.);
    • Experience building ETL data pipelines (Airflow, Prefect, Dagster, etc);
    • Knowledge of Amazon Web Services (AWS) ecosystem (S3, Glue, Athena);
    • Experience with at least one MMM or marketing analytics framework (e.g., Robyn, PyMC-Marketing, Meridian, or similar);
    • Strong communication skills to explain technical insights to non-technical stakeholders.

    Nice to have:

    • Knowledge of digital advertising platforms (Google Ads, DV360, Meta, Amazon, etc.) and campaign performance metrics;
    • Exposure to clean rooms (Google Ads Data Hub, Amazon Marketing Cloud);
    • Familiarity with industry and syndicated data sources (Nielsen, Kantar, etc.);
    • Experience with optimisation techniques (budget allocation, constrained optimisation);
    • Familiarity with gen AI (ChatGPT APIs/agents, prompt engineering, RAG, vector databases). 

    Educational requirements:

    • Bachelor’s degree in Computer Science, Information Systems, or a related discipline is preferred. A Master's degree or higher is a distinct advantage.

    What impact you’ll make 

    • Build and validate marketing measurement models (e.g., MMM, attribution) to understand the impact of media spend on business outcomes;
    • Develop and maintain data pipelines and transformations to prepare campaign, performance, and contextual data for modelling;
    • Run exploratory analyses to uncover trends, correlations, and drivers of campaign performance;
    • Support the design of budget optimisation and scenario planning tools;
    • Collaborate with engineers, analysts, and planners to operationalise models into workflows and dashboards;
    • Translate model outputs into clear, actionable recommendations for client and internal teams.
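    A core building block of the MMM work described above is the carryover (adstock) transform applied to media spend before regression. The sketch below shows a geometric adstock; the decay rate and spend series are hypothetical.

```python
# Toy sketch: geometric adstock, the carryover transform MMM-style
# frameworks (e.g., Robyn) apply to media spend before modelling.
# Decay rate and spend values below are made up.
def adstock(spend, decay=0.5):
    carried = 0.0
    out = []
    for s in spend:
        carried = s + decay * carried  # today's spend plus decayed carryover
        out.append(carried)
    return out

effective = adstock([100.0, 0.0, 0.0, 50.0], decay=0.5)
```

    The transformed series, rather than raw spend, then enters the regression that attributes outcomes to media channels.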

    What you’ll get 

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.

  • · 27 views · 8 applications · 25d

    Middle Machine Learning Engineer

    Full Remote · Ukraine · Product · 3 years of experience · English - B2

    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the Role:
    We’re looking for a mid-level AI Engineer to help test, deploy, and integrate cutting-edge generative AI models into production experiences centered around human avatars and 3D content. You’ll work directly with the CEO to turn R&D prototypes into stable, scalable products.

    Responsibilities:
    - Experiment with and evaluate generative models for:

    • Human avatar creation and animation;
    • 3D reconstruction and modeling;
    • Gaussian splatting–based pipelines;
    • Generalized NeRF (Neural Radiance Fields) techniques.

    - Turn research code and models into production-ready services (APIs, microservices, or batch pipelines).
    - Build and maintain Python-based tooling for data preprocessing, training, evaluation, and inference.
    - Design and optimize cloud-based deployment workflows (e.g., containers, GPUs, inference endpoints, job queues).
    - Integrate models into user-facing applications in collaboration with product, design, and frontend teams.
    - Monitor model performance, reliability, and cost; propose and implement improvements.
    - Stay up-to-date on relevant research and help prioritize which techniques to test and adopt.
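    The NeRF techniques listed above all rest on the same volume-rendering step: compositing sample weights along a ray from densities. A minimal sketch, with made-up densities and spacings:

```python
import math

# Toy sketch of NeRF-style volume rendering weights along a single ray.
# `sigmas` are hypothetical densities at sample points, `deltas` the
# spacing between samples; both are made up for illustration.
def render_weights(sigmas, deltas):
    weights, transmittance = [], 1.0
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        weights.append(transmittance * alpha)   # light absorbed here
        transmittance *= (1.0 - alpha)          # light surviving past it
    return weights

w = render_weights([0.0, 5.0, 5.0], [0.1, 0.1, 0.1])
```

    Gaussian-splatting pipelines use the same alpha-compositing idea, but over rasterized 3D Gaussians instead of ray samples.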

    Required Qualifications:
    - 3–5+ years of experience as an ML/AI Engineer or similar role.
    - Strong Python skills and experience with one or more deep learning frameworks (PyTorch preferred).
    - Hands-on experience with deploying ML models to cloud environments (AWS, GCP, Azure, or similar) including containers (Docker) and basic CI/CD workflows.
    - Familiarity with 3D data formats and pipelines (meshes, point clouds, volumetric representations, etc.).
    - Practical exposure to one or more of the following (professional or serious personal projects):

    • NeRFs or NeRF-like methods;
    • Gaussian splatting / 3D Gaussian fields;
    • Avatar generation / face-body reconstruction / pose estimation.

    - Comfort working in an iterative, fast-paced environment directly with leadership (reporting to the CEO).


    Nice-to-Haves:
    - Experience with real-time rendering pipelines (e.g., Unity, Unreal, WebGL) or GPU programming (CUDA).
    - Experience optimizing inference performance and cost (model distillation, quantization, batching).
    - Background in computer vision, graphics, or related fields (academic or industry).

  • · 22 views · 4 applications · 25d

    Senior Computer Vision Engineer

    Full Remote · Ukraine · 4 years of experience · English - B2

    Job Description

    4+ years of experience in computer vision or related fields.
    Strong knowledge of machine learning/deep learning.
    Hands-on experience with object detectors, instance segmentation, keypoint/pose detection, RNNs/Transformers (computer vision in time domain), tracking algorithms.
    Proficiency in Python; C++ is optional.
    Extensive experience with computer vision libraries and frameworks such as PyTorch and OpenCV.
    Familiarity with image processing techniques and annotation tools.
    Experience with hardware integration for vision-based systems.
    Good understanding of best practices of software development (code reviews, TDD, Git, etc.)
    Advanced English (written and verbal) for daily communication with the customer.
    Efficiency in remote development on Linux (VMs, on-premise machines).
    Would be a Plus:
    Understanding of real-time processing and optimization.
    Experience with edge AI deployment.
    Experience with optimization and inference libraries (ONNX, TensorRT, OpenVINO, etc.).
    Linux development.
    Experience in medical computer vision.
    Experience in managing huge volumes of visual data.
    Ability to leverage advanced NVIDIA Ada GPU features to speed up training and inference.
    Experience with CPU and GPU profiling (NVIDIA Nsight, cProfile).
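    As a small concrete example of the detector evaluation work implied above, intersection-over-union (IoU) is the standard overlap metric for object detection. The boxes below are hypothetical, in (x1, y1, x2, y2) form:

```python
# Toy sketch: intersection-over-union (IoU) between two axis-aligned
# boxes, the standard overlap metric when evaluating object detectors.
# Box coordinates are (x1, y1, x2, y2); the example boxes are made up.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 0, 15, 10))  # half-overlapping boxes
```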

     

    Job Responsibilities

    Contribute to the design, development, code review, and testing of computer vision algorithms and systems.
    Develop new features and improve existing functionality in vision-based projects.
    Work on the integration of computer vision solutions with third-party tools and hardware.
    Collaborate with cross-functional teams to deliver high-quality, compliant products.
    Stay updated with the latest advancements in computer vision and machine learning technologies.

     

    Department/Project Description

    As a Computer Vision Engineer, you will join a mature and senior team dedicated to developing cutting-edge computer vision solutions for medical applications. Our projects range from advanced image processing to real-time vision systems, contributing to fields like medical devices, robotics, autonomous vehicles and others. We emphasize technical excellence and offer a stimulating environment that encourages innovation and professional growth.

  • · 59 views · 20 applications · 22d

    Data Scientist / ML Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · English - B2

    We’re hiring: Data Scientist / ML Engineer
    Product: E-commerce solution with AI integration
    Format: Remote

    What you’ll do

    • Build and improve ML models (risk/anomaly-style scenarios)
    • Analyze patterns, validate hypotheses, and iterate on models
    • Support deployment and monitoring in collaboration with engineering/product
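    As a minimal stand-in for the risk/anomaly-style scenarios above, a z-score rule is the simplest anomaly flag; the function and data below are hypothetical.

```python
# Toy sketch: flag anomalies with a z-score rule, a minimal stand-in for
# the risk/anomaly-style scenarios described above. Data are made up.
def zscore_anomalies(values, threshold=2.0):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5
    # keep values more than `threshold` standard deviations from the mean
    return [v for v in values if std and abs(v - mean) / std > threshold]

flags = zscore_anomalies([10, 10, 10, 10, 10, 10, 10, 10, 10, 100])
```

    Production models would of course learn richer patterns, but the deploy-and-monitor loop around them looks the same.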

    Requirements

    • 3+ years of experience
    • Hands-on ML/DS workflow experience (data prep, modeling, evaluation)
    • Ability to work with production data and deliver measurable outcomes.

    What we offer

    • 24 paid vacation days/year (after probation)
    • Paid sick leave (after probation)
    • Remote work option
    • Company-supported English courses
  • · 111 views · 33 applications · 11d

    Senior AI / Machine Learning Engineer to $6500

    Full Remote · Worldwide · Product · 5 years of experience · English - B2

    About Tie

    Tie is building the next generation of identity resolution and marketing intelligence. Our platform connects hundreds of millions of consumers across devices, browsers, and channels, without relying on cookies, to power higher deliverability, smarter targeting, and measurable revenue lift for modern marketing teams.

    At Tie, AI is not a feature; it is a core execution advantage. We operate large-scale identity graphs, real-time scoring systems, and production ML pipelines that directly impact revenue, deliverability, and customer growth.

    The Role

    We are looking for a Senior AI / Machine Learning Engineer to design, build, and deploy production ML systems that sit at the heart of our identity graph and scoring platform. You will work at the intersection of machine learning, graph data, and real-time systems, owning models end to end, from feature engineering and training through deployment, monitoring, and iteration.

    This role is highly hands-on and impact-driven. You will help define Tie’s ML architecture, ship models that operate at sub-second latency, and partner closely with platform engineering to ensure our AI systems scale reliably.

    What You’ll Do

    • Design and deploy production-grade ML models for identity resolution, propensity scoring, deliverability, and personalization
    • Build and maintain feature pipelines across batch and real-time systems (BigQuery, streaming events, graph-derived features)
    • Develop and optimize classification models (e.g., XGBoost, logistic regression) with strong handling of class imbalance and noisy labels
    • Integrate ML models directly with graph databases to support real-time inference and identity scoring
    • Own model lifecycle concerns: evaluation, monitoring, drift detection, retraining, and performance reporting
    • Partner with engineering to expose models via low-latency APIs and scalable services
    • Contribute to GPU-accelerated and large-scale data processing efforts as we push graph computation from hours to minutes
    • Help shape ML best practices, tooling, and standards across the team
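    One common way to address the class imbalance mentioned above, when using XGBoost, is its scale_pos_weight parameter, typically set to the negative/positive ratio. A sketch with hypothetical labels:

```python
# Toy sketch: computing XGBoost's scale_pos_weight for an imbalanced
# binary-classification dataset (negatives / positives), one common way
# to handle class imbalance. The labels below are hypothetical.
def pos_weight(labels):
    pos = sum(labels)            # count of positive (1) labels
    neg = len(labels) - pos      # count of negative (0) labels
    return neg / pos

labels = [1] * 10 + [0] * 990    # 1% positive rate
w = pos_weight(labels)           # pass as scale_pos_weight=w to XGBClassifier
```

    Evaluation should then lean on rank-based metrics (AUC-PR, precision@k) rather than accuracy, which is uninformative at this imbalance.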

    What You’ll Bring

    Required Qualifications

    • 5+ years of experience building and deploying machine learning systems in production
    • Strong proficiency in Python for ML, data processing, and model serving
    • Hands-on experience with feature engineering, model training, and evaluation for real-world datasets
    • Ability to travel outside of Ukraine is a must
    • Experience deploying ML models via APIs or services (e.g., FastAPI, containers, Kubernetes)
    • Solid understanding of data modeling, SQL, and analytical workflows
    • Experience working in a cloud environment (GCP, AWS, or equivalent)
    • Experience with graph data, graph databases, or graph-based ML
    • Familiarity with Neo4j, Cypher, or graph algorithms (community detection, entity resolution)

      Preferred / Bonus Experience

    • Experience with XGBoost, tree-based models, or similar classical ML approaches
    • Exposure to real-time or streaming systems (Kafka, Pub/Sub, event-driven architectures)
    • Experience with MLOps tooling and practices (CI/CD for ML, monitoring, retraining pipelines)
    • GPU or large-scale data processing experience (e.g., RAPIDS, CUDA, Spark, or similar)
    • Domain experience in identity resolution, marketing technology, or email deliverability

    Our Technology Stack

    • ML & Data: Python, Pandas, Scikit-learn, XGBoost
    • Graphs: Neo4j (Enterprise, GDS)
    • Cloud: Google Cloud Platform (BigQuery, Vertex AI, Cloud Run, Pub/Sub)
    • Infrastructure: Docker, Kubernetes, GitHub Actions
    • APIs: FastAPI, REST-based inference services

    What We Offer

    • Competitive compensation, including salary, equity, and performance incentives
    • Opportunity to work on core AI systems that directly impact revenue and product differentiation
    • High ownership and autonomy in a senior, hands-on role
    • Remote-first culture with a strong engineering and data focus
    • Exposure to cutting-edge problems in identity resolution, graph ML, and real-time AI systems
    • Clear growth path toward Staff / Principal IC roles

      What else:

    • 4 weeks of paid vacation per year (flexible scheduling)
    • Unlimited sick leave: we trust your judgment and care about your health
    • US Bank Holidays off (American calendar)
    • Remote-first culture and flexible working hours
    • Flat structure, no micromanagement, and full ownership
    • Opportunity to make a real impact during a critical growth phase

      Interview Process

    • Recruitment screening call
    • Initial call with the Head of Data Science & AI and the CTO (30 min), in English
    • Technical deep-dive interview (1.5h), in English
    • Optional test task (paid)

      Why Join Us?

    • High-impact delivery leadership role during a critical period
    • Real ownership and autonomy
    • Opportunity to shape delivery across the entire engineering organization
    • Exposure to SaaS, data, integrations, automation, and platform work
    • Collaboration with global teams and vendors
    • A strong product with real scale and momentum

    Why This Role Matters

    At Tie, your work will not live in notebooks or experiments; it will power production systems used by real customers at scale. You will help define how AI is embedded into the company’s core platform and play a key role in making machine learning a durable competitive advantage.

     

  • · 73 views · 5 applications · 1d

    Head of Data Science

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B2

    About Everstake
    Everstake is the largest decentralized staking provider in Ukraine and one of the top 5 blockchain validators worldwide. We help institutional and retail investors participate in staking across more than 85 blockchain networks, including Solana, Ethereum, Cosmos, and many others. By building secure, scalable, and reliable blockchain infrastructure, we support the growth of the global Web3 ecosystem and enable the adoption of decentralized technologies worldwide.
     

    About the Role
    We are looking for a Head of Data Science to own and scale Everstake’s data science and analytics function. This is a hands-on leadership role with a strong technical focus. You will define the data science direction, lead senior-level engineers, and work closely with the CDO, product, engineering, and business teams to drive data-informed decisions across a complex Web3 infrastructure.
    You will be responsible not only for analytics and modeling, but also for data architecture, orchestration, performance, reliability, and engineering standards in a fast-growing blockchain environment.

    Key Responsibilities:

    • Own and evolve data science and analytics architecture across Everstake
    • Design and maintain scalable data pipelines, metrics layers, and analytical models
    • Lead technical decision-making across data platforms, BI, and orchestration
    • Translate blockchain, product, and business problems into clear data solutions
    • Define data standards, best practices, and development guidelines
    • Review code, data models, and pipelines for quality, performance, and correctness
    • Mentor senior data scientists and analysts, provide technical leadership
    • Partner closely with product, backend, infrastructure, and finance teams
    • Ensure data reliability, observability, and correctness in production
    • Actively contribute hands-on where technical depth is required


    Requirements (Must-Have):
    Seniority & Leadership

    • 6+ years of professional experience in data-related roles
    • Strong experience as a Senior / Lead Data Scientist or Analytics Engineer
    • Proven ability to lead technically strong teams and initiatives
    • Ability to balance hands-on execution with leadership responsibilities

     

    Core Technical Skills

    • Python: expert level (data processing, analytics, modeling, production code)
    • Apache Airflow: 2–3+ years of hands-on experience
       (DAG design, dependencies, retries, backfills, monitoring, failure handling)
       

    Databases & Warehouses

    • ClickHouse (performance tuning, large-scale analytics)
    • PostgreSQL
    • Snowflake


    BI & Analytics

    • Power BI and/or Tableau
    • Strong understanding of semantic layers, metrics definitions, and data modeling 


    Infrastructure & Observability

    • Docker
    • Git
    • Grafana (monitoring data pipelines and platform health)
       

    Data & Systems Thinking

    • Strong understanding of data modeling (facts, dimensions, slowly changing data)
    • Experience designing KPIs and metrics that actually reflect business reality
    • Ability to identify incorrect assumptions, misleading metrics, and data biases
    • Experience working with high-volume, high-frequency, or near–real-time data
    • Strong SQL skills and performance-oriented thinking


    Blockchain / Crypto Domain (Required)

    • Practical experience in blockchain, crypto, or Web3 products
    • Experience working with blockchain-derived datasets or crypto-financial metrics
    • Ability to reason about probabilistic, noisy, and incomplete on-chain data
    • Understanding of:
      o Blockchain mechanics (validators, staking, rewards, transactions);
      o Wallets, addresses, and transaction flows;
      o On-chain vs off-chain data.
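    The crypto-financial metrics referenced above often reduce to simple compounding arithmetic; for example, annualizing a per-epoch staking reward rate into an APY. The rates below are made up for illustration:

```python
# Toy sketch: annualized percentage yield (APY) compounded from a
# per-epoch staking reward rate, the kind of crypto-financial metric
# referenced above. The rate and epoch count below are hypothetical.
def apy(rate_per_epoch, epochs_per_year):
    return (1 + rate_per_epoch) ** epochs_per_year - 1

annual = apy(0.0001, 1000)  # 0.01% per epoch, 1000 epochs per year
```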


    Soft Skills:

    • Systems and critical thinking
    • Strong communication skills with technical and non-technical stakeholders
    • Team-oriented mindset with high ownership and accountability
    • Fluent English (B2+ or higher)


    Nice-to-Have:

    • Experience in staking, DeFi, or blockchain infrastructure companies
    • Background in analytics engineering or data platform teams
    • Experience building data systems from scratch or scaling them significantly
    • Familiarity with financial or yield-related metrics
    • Experience working in globally distributed teams


    What We Offer:

    • Opportunity to work on mission-critical Web3 infrastructure used globally
    • Head-level role with real influence on data and technical strategy
    • Fully remote work format
    • Competitive compensation aligned with experience and seniority
    • Professional growth in a top-tier Web3 engineering organization
    • Strong engineering culture with focus on quality, ownership, and impact
  • Β· 40 views Β· 5 applications Β· 15d

    Head of Data Science

    Full Remote Β· Countries of Europe or Ukraine Β· Product Β· 6 years of experience Β· English - B2
    About Everstake Everstake is the largest decentralized staking provider in Ukraine and one of the top 5 blockchain validators worldwide. We help institutional and retail investors participate in staking across more than 85 blockchain networks, including...

    About Everstake
    Everstake is the largest decentralized staking provider in Ukraine and one of the top 5 blockchain validators worldwide. We help institutional and retail investors participate in staking across more than 85 blockchain networks, including Solana, Ethereum, Cosmos, and many others. By building secure, scalable, and reliable blockchain infrastructure, we support the growth of the global Web3 ecosystem and enable the adoption of decentralized technologies worldwide.
     

    About the Role
    We are looking for a Head of Data Science to own and scale Everstake’s data science and analytics function. This is a hands-on leadership role with a strong technical focus. You will define the data science direction, lead senior-level engineers, and work closely with the CDO, product, engineering, and business teams to drive data-informed decisions across a complex Web3 infrastructure.
    You will be responsible not only for analytics and modeling, but also for data architecture, orchestration, performance, reliability, and engineering standards in a fast-growing blockchain environment.

    Key Responsibilities:

    • Own and evolve data science and analytics architecture across Everstake
    • Design and maintain scalable data pipelines, metrics layers, and analytical models
    • Lead technical decision-making across data platforms, BI, and orchestration
    • Translate blockchain, product, and business problems into clear data solutions
    • Define data standards, best practices, and development guidelines
    • Review code, data models, and pipelines for quality, performance, and correctness
    • Mentor senior data scientists and analysts, provide technical leadership
    • Partner closely with product, backend, infrastructure, and finance teams
    • Ensure data reliability, observability, and correctness in production
    • Actively contribute hands-on where technical depth is required


    Requirements (Must-Have):
    Seniority & Leadership

    • 6+ years of professional experience in data-related roles
    • Strong experience as a Senior / Lead Data Scientist or Analytics Engineer
    • Proven ability to lead technically strong teams and initiatives
    • Ability to balance hands-on execution with leadership responsibilities

     

    Core Technical Skills

    • Python β€” expert level (data processing, analytics, modeling, production code)
    • Apache Airflow β€” 2–3+ years of hands-on experience
       (DAG design, dependencies, retries, backfills, monitoring, failure handling)
       

    Databases & Warehouses

    • ClickHouse (performance tuning, large-scale analytics)
    • PostgreSQL
    • Snowflake


    BI & Analytics

    • Power BI and/or Tableau
    • Strong understanding of semantic layers, metrics definitions, and data modeling 


    Infrastructure & Observability

    • Docker
    • Git
    • Grafana (monitoring data pipelines and platform health)
       

    Data & Systems Thinking

    • Strong understanding of data modeling (facts, dimensions, slowly changing dimensions)
    • Experience designing KPIs and metrics that actually reflect business reality
    • Ability to identify incorrect assumptions, misleading metrics, and data biases
    • Experience working with high-volume, high-frequency, or near–real-time data
    • Strong SQL skills and performance-oriented thinking
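To illustrate the facts/dimensions/slowly-changing-data point above, here is a minimal Type 2 slowly-changing-dimension upsert in plain Python; a sketch with invented field names, not a production pattern:

```python
from datetime import date

def scd2_upsert(dimension, key, new_attrs, today):
    """Type 2 slowly changing dimension: when tracked attributes change,
    close the current row (set valid_to) and append a new open version."""
    current = next((row for row in dimension
                    if row["key"] == key and row["valid_to"] is None), None)
    if current is not None and current["attrs"] == new_attrs:
        return dimension                      # nothing changed: keep as-is
    if current is not None:
        current["valid_to"] = today           # close out the old version
    dimension.append({"key": key, "attrs": new_attrs,
                      "valid_from": today, "valid_to": None})
    return dimension

# Invented example: a customer whose subscription plan changes
dim = []
scd2_upsert(dim, "cust-1", {"plan": "free"}, date(2024, 1, 1))
scd2_upsert(dim, "cust-1", {"plan": "pro"}, date(2024, 6, 1))
# dim now holds two versions: 'free' closed on 2024-06-01, 'pro' still open
```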


    Blockchain / Crypto Domain (Required)

    • Practical experience in blockchain, crypto, or Web3 products
    • Experience working with blockchain-derived datasets or crypto-financial metrics
    • Ability to reason about probabilistic, noisy, and incomplete on-chain data
    • Understanding of blockchain mechanics (validators, staking, rewards, transactions)
    • Understanding of wallets, addresses, and transaction flows
    • Understanding of on-chain vs. off-chain data


    Soft Skills:

    • Systems and critical thinking
    • Strong communication skills with technical and non-technical stakeholders
    • Team-oriented mindset with high ownership and accountability
    • Fluent English (B2+ or higher)


    Nice-to-Have:

    • Experience in staking, DeFi, or blockchain infrastructure companies
    • Background in analytics engineering or data platform teams
    • Experience building data systems from scratch or scaling them significantly
    • Familiarity with financial or yield-related metrics
    • Experience working in globally distributed teams


    What We Offer:

    • Opportunity to work on mission-critical Web3 infrastructure used globally
    • Head-level role with real influence on data and technical strategy
    • Fully remote work format
    • Competitive compensation aligned with experience and seniority
    • Professional growth in a top-tier Web3 engineering organization
    • Strong engineering culture with focus on quality, ownership, and impact

    Junior Data Scientist

    Full Remote · Worldwide · English - B2

    As a Junior Data Scientist, you will contribute to the development and delivery of data science products, working alongside senior data scientists. You will be involved in implementing and refining supervised learning, bandit algorithms, and generative AI models, as well as supporting
    experimentation and analysis.
    You will write production-quality Python code, collaborate on cloud-based deployments, and help translate data insights into actionable recommendations that drive business impact. This role provides hands-on experience while allowing you to take ownership of well-scoped components
    within larger projects.
    This is a fantastic opportunity for an early-career data scientist with an analytical background to join and grow within a market-leading digital content agency and media network.
     

      CORE RESPONSIBILITIES

      • Model Development: Assist in developing, testing, and improving machine learning models, with a focus on bandit algorithms and experimentation frameworks.
      • Experimentation: Support the setup, execution, and analysis of A/B tests and online experiments to evaluate the impact of our generative AI-driven products.
      • Production Support: Assist with deploying and monitoring models and experiments on GCP (Airflow, Docker, Cloud Run, SQL databases, etc.), following existing patterns and CI/CD workflows.
      • Data Analysis: Perform exploratory data analysis, data validation, and basic feature engineering to support modelling and experimentation efforts.
      • Collaboration: Work closely with senior data scientists, engineers, and product stakeholders to understand business problems and translate them into actionable tasks.
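The bandit-algorithm work mentioned above can be illustrated with a minimal epsilon-greedy multi-armed bandit; this is a toy sketch with invented reward rates, not the team's actual framework:

```python
import random

def epsilon_greedy(true_rates, epsilon=0.1, steps=5000, seed=42):
    """Minimal epsilon-greedy bandit: with probability epsilon pull a
    random arm (explore), otherwise pull the best estimate (exploit)."""
    rng = random.Random(seed)
    n_arms = len(true_rates)
    counts = [0] * n_arms        # pulls per arm
    values = [0.0] * n_arms      # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
    return counts, values

counts, values = epsilon_greedy([0.05, 0.10, 0.20])
# with this many steps the pulls typically concentrate on the 0.20 arm
```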

      SKILLS REQUIRED FOR THIS ROLE
      Essential Functional/Job-specific skills

      • Bachelor’s or Master’s degree in Data Science, Computer Science, Mathematics, Statistics, or a related field with 1+ years of relevant work experience.
      • Solid foundation in SQL and Python, including experience with common libraries such as Pandas, NumPy, Scikit-learn, Matplotlib, and Statsmodels.
      • Basic understanding of supervised learning, experimentation, causal inference, and concepts in reinforcement learning and multi-armed bandits.
      • Foundational knowledge of probability, statistics, and linear algebra.
      • Working knowledge of Git, including version control and collaboration through pull requests and code reviews.
      • Ability to write good documentation and to explain analysis results clearly to technical and non-technical audiences.
      • Familiarity with deploying machine learning models in production cloud environments (GCP or AWS).
       

      Essential core skills:

    • Communication
    • Collaboration
    • Organisation
    • Delivering Results
    • Solutions Focused
    • Adaptability

    Data Architect

    Full Remote · Ukraine · 4 years of experience · English - B2

    PwC is a global network of more than 370,000 professionals in 149 countries that turns challenges into opportunities. We create innovative solutions in audit, consulting, tax and technology, combining knowledge from all over the world.

     

    PwC SDC Lviv, opened in 2018, is part of this global space. It is a place where technology is combined with team spirit, and ambitious ideas find their embodiment in real projects for Central and Eastern Europe.

     

    What do we guarantee?

    • Work format: Remote or in a comfortable office in Lviv - you choose.
    • Development: Personal development plan, mentoring, English and Polish language courses.
    • Stability: Official employment from day one, annual review of salary and career prospects.
    • Corporate culture: Events that unite the team and a space where everyone can be themselves.

     

    Join us as a Data Architect / Lead and play a key role in shaping the data foundation behind our next generation of analytics and AI solutions. In this position, you’ll define the architecture vision for our modern data ecosystem, guide a growing team of Data Engineers, and build cloud‑native platforms that unlock enterprise‑wide insights. This is a high‑impact role where you’ll work closely with business and technology leaders, influence strategic decisions, and drive the adoption of advanced analytics and Generative AI across the organization.

     

    What You’ll Do:

     

    Lead & Strategize

    • Lead and mentor Data Engineers, fostering innovation and continuous improvement.
    • Own the data architecture vision aligned with business goals.
    • Partner with executives and stakeholders to turn strategic needs into scalable data solutions.

    Architect & Build

    • Design and optimize modern data platforms using Azure Data Lake, Databricks, SQL Server, Microsoft Fabric, and NoSQL.
    • Build robust, scalable data pipelines across diverse data sources.
    • Implement strong data quality, governance, and security practices.
    • Support advanced analytics, machine learning, and AI-based solutions.

    Enable Insights

    • Build solutions that deliver accurate, timely insights for decision-makers.
    • Collaborate on Power BI dashboards and executive reporting.
    • Integrate Generative AI into insight-generation workflows.

     

    Requirements:

     

    • Bachelor’s degree in Computer Science, Data Engineering, or related field.
    • 4+ years in data engineering, architecture, or analytics roles.
    • Experience leading Data Engineering teams in enterprise settings will be a plus.
    • Strong skills in Azure data services, Databricks, SQL, NoSQL, and Microsoft Fabric.
    • Hands-on Power BI and enterprise reporting experience.
    • Proven ability to build data pipelines and enforce data quality.
    • Excellent communication skills, especially with executive stakeholders.
    • Relevant certifications (Azure Data Engineer, Databricks, Fabric Analytics, etc.) are a plus.

     

     

    Policy statements:
    https://www.pwc.com/ua/uk/about/privacy.html


    Data Architect (AWS) (IRC286424)

    Full Remote · Croatia, Poland, Romania, Slovakia, Ukraine · 10 years of experience · English - B2

    Description

    The client is a pioneer in medical devices for less invasive surgical procedures, ranking as a leader in the market for coronary stents. The company’s medical devices are used in a variety of interventional medical specialties, including interventional cardiology, peripheral interventions, vascular surgery, electrophysiology, neurovascular intervention, oncology, endoscopy, urology, gynecology, and neuromodulation.
    The client’s mission is to improve the quality of patient care and the productivity of health care delivery through the development and advocacy of less-invasive medical devices and procedures. This is accomplished through the continuing refinement of existing products and procedures and the investigation and development of new technologies that can reduce risk, trauma, cost, procedure time and the need for aftercare.

     

    Job responsibilities

    Boston Scientific is seeking a highly motivated R&D Data Engineer to support our R&D team in data management and development of complex electro-mechanical medical device systems. In this role you will use your technical and collaboration skills alongside your passion for data, innovation, and continuous improvement to help drive our product development forward.

    • Design a systems-level architecture for clinical, device, and imaging data and pipelines to support machine learning & classical algorithm development throughout the product lifecycle.
    • Ensure the architecture supports high-throughput image ingestion, indexing, and retrieval.
    • Advance conceptual, logical, and physical data models for structured, semi-structured, and unstructured data.
    • Help define and document data standards and definitions.
    • Implement governance frameworks that apply healthcare and data regulations (HIPAA, FDA Part 11, GDPR, etc.) to the data architecture.
    • Perform strategic validation tasks for data management tools and platforms.
    • Collaborate closely with data scientists, cloud data engineers, algorithm engineers, clinical engineers, software engineers, and systems engineers locally and globally.
    • Investigate, research, and recommend appropriate software designs, machine learning operations, and tools for dataset organization, controls, and traceability.
    • In all actions, lead with integrity and demonstrate a primary commitment to patient safety and product quality by maintaining compliance with all documented quality processes and procedures.

     

    Requirements

    Required Qualifications
    • Bachelor’s degree or higher in Computer Science, Software Engineering, Data Science, Biomedical Engineering, or a related field
    • 6+ years of relevant work experience with a Bachelor’s degree
    • 3+ years of relevant work experience with a Master’s or PhD
    • 4+ years of consistent coding in Python
    • Strong understanding and use of relational databases and clinical data models
    • Experience working with medical imaging data (DICOM), computer vision algorithms, and tools
    • Experience with AWS and cloud technologies and AWS DevOps tools
    • Experience creating and managing CI/CD pipelines in AWS
    • Experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or AWS CDK
    • Excellent organizational, communication, and collaboration skills
    • Foundational knowledge of machine learning (ML) operations and imaging ML pipelines

    Preferred Qualifications
    • Experience with software validation in a regulated industry
    • Experience with cloud imaging tools (e.g., AWS Health Imaging, Azure Health Data Services)
    • Working knowledge of data de-identification/pseudonymization methods
    • Experience manipulating tabular metadata using SQL and Python’s Pandas library
    • Experience with the Atlassian tool chain
    • Experience with data and annotation version control tools and processes
    • Knowledge of HIPAA, FDA regulations (21 CFR Part 11), and GDPR for medical device data governance


    Data Architect (Azure Platform) IRC279265

    Full Remote · Ukraine, Poland, Romania, Croatia, Slovakia · 10 years of experience · English - B2

    Description

    As the Data Architect, you will be the senior technical visionary for the Data Platform. You will be responsible for the high-level design of the entire solution, ensuring it is scalable, secure, and aligned with the company’s long-term strategic goals. Your decisions will form the technical foundation upon which the entire platform is built, from initial batch processing to future real-time streaming capabilities.

     

    Requirements

    Required Skills (Must-Haves)

    – Cloud Architecture: Extensive experience designing and implementing large-scale data platforms on Microsoft Azure.
    – Expert Technical Knowledge: Deep, expert-level understanding of the Azure data stack, including ADF, Databricks, ADLS, Synapse, and Purview.
    – Data Concepts: Mastery of data warehousing, data modeling (star schemas), data lakes, and both batch and streaming architectural patterns.
    – Strategic Thinking: Ability to align technical solutions with long-term business strategy.

    Nice-to-Have Skills:

    – Hands-on Coding Ability: Proficiency in Python/PySpark, allowing for the creation of architectural proofs-of-concept.
    – DevOps & IaC Acumen: Deep understanding of CI/CD for data platforms, experience with Infrastructure as Code (Bicep/Terraform), and experience with Azure DevOps for big data services.
    – Azure Cost Management: Experience with FinOps and optimizing the cost of Azure data services.

     

    Job responsibilities

    – End-to-End Architecture Design: Design and document the complete, end-to-end data architecture, encompassing data ingestion, processing, storage, and analytics serving layers.
    – Technology Selection & Strategy: Make strategic decisions on the use of Azure services (ADF, Databricks, Synapse, Event Hubs) to meet both immediate MVP needs and future scalability requirements.
    – Define Standards & Best Practices: Establish data modeling standards, development best practices, and governance policies for the engineering team to follow.
    – Technical Leadership: Provide expert technical guidance and mentorship to the data engineers and BI developers, helping them solve the most complex technical challenges.
    – Stakeholder Communication: Clearly articulate the architectural vision, benefits, and trade-offs to technical teams, project managers, and senior business leaders.


    ML Architect / Principal Data Scientist, Healthcare Business Unit (EMEA)

    Full Remote · Poland, Romania, Slovakia, Croatia · 7 years of experience · English - C1

    Job Description

    • Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, Applied Mathematics, or related field.
    • 7+ years of experience in data science or machine learning roles, ideally with exposure to healthcare projects.
    • Strong knowledge of ML frameworks such as scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM.
    • Proficiency in Python for data science and related libraries (NumPy, pandas, matplotlib, seaborn, etc.).
    • Experience working with large datasets and data processing frameworks (e.g., Spark, Dask, SQL).
    • Understanding of MLOps concepts and tools (e.g., MLflow, Kubeflow, Vertex AI, Azure ML).
    • Familiarity with cloud environments (Azure, AWS, or GCP) for training and deploying models.
    • Experience with model interpretability, fairness, and explainability techniques.
    • Strong communication and visualization skills for storytelling with data.
    • English proficiency at Upper-Intermediate level or higher.


    Preferred Qualifications (Nice to Have)

    • Experience working with medical data (EHR, imaging, wearables, clinical trials, etc.).
    • Familiarity with healthcare regulations related to data and AI (e.g., HIPAA, GDPR, FDA AI/ML guidelines).
    • Knowledge of FHIR, HL7, or other healthcare interoperability standards.
    • Practical experience with deep learning models (e.g., CNNs for imaging, transformers for NLP).
    • Involvement in presales, proposal writing, or technical advisory work.

    Job Responsibilities

    • Lead the design and development of AI/ML solutions across HealthTech and MedTech projects.
    • Participate in technical presales by analyzing business cases and identifying opportunities for AI/ML application.
    • Build and validate predictive models, classification systems, NLP workflows, and optimization algorithms.
    • Collaborate with software engineers, cloud architects, and QA to integrate models into scalable production systems.
    • Define and guide data acquisition, preprocessing, labeling, and augmentation strategies.
    • Contribute to the development of GlobalLogic’s healthcare-focused AI accelerators and reusable components.
    • Present technical solutions to clients, both business and technical audiences.
    • Support model monitoring, drift detection, and retraining pipelines in deployed systems.
    • Ensure adherence to privacy, security, and compliance standards for data and AI usage.
    • Author clear documentation and contribute to knowledge sharing within the Architects Team.

    Department/Project Description

    Join GlobalLogic’s Architects Team within the Healthcare Business Unit, supporting clients across the EMEA region. This strategic role focuses on engaging new clients, solving real-world healthcare challenges, and launching data-driven AI/ML projects. You will work closely with clients and internal stakeholders to translate complex business needs into impactful data science solutions. If you’re passionate about applying data science in a meaningful domain and want to shape the future of healthcare, we’d love to hear from you.


    Data Scientist

    Full Remote · Worldwide · Product · 3 years of experience · English - B1

    Almus is looking for a Data Scientist to join our Analytics team and build production-grade machine learning models that directly impact marketing and business performance.

    You will work on end-to-end ML solutions, from data and features to deployment and monitoring, focusing on improving LTV prediction quality, optimizing ML-driven costs, and driving key metrics such as LTV, ROAS, retention, and CAC. This is an individual contributor role with strong ownership, close collaboration with Marketing, Product, and Data teams, and a clear focus on real business impact.

    Apply to join Almus and take ownership of high-impact data initiatives!

     

    Responsibilities

    • Design, develop, and deploy machine learning models to production
    • Improve product and business decision-making through data-driven approaches
    • Build and evolve end-to-end ML pipelines (data β†’ features β†’ model β†’ inference β†’ monitoring)
    • Drive measurable impact on key product and commercial metrics
    • Standardize ML approaches within the team (best practices, documentation, reproducibility)
    • Provide technical input to the architecture of analytics and ML infrastructure
    • Develop and deploy models that drive growth in LTV, ROAS, retention, and CAC
    • Influence performance and lifecycle marketing strategy
    • Act as a domain expert and collaborate closely with Marketing, Product, and Data Engineering teams

     

    What We Look For

    • 3+ years of experience as a Data Scientist / ML Engineer
    • Experience working with mobile subscription-based products
    • Strong Python skills (production-level code)
    • Solid knowledge of classical machine learning algorithms and practical experience applying them
    • Experience with feature engineering, model evaluation, and bias–variance trade-offs
    • Hands-on experience with marketing models such as LTV, churn, cohort, and funnel modeling
    • Experience with attribution, incrementality, and uplift modeling
    • Strong SQL skills and experience working with analytical datasets
    • Experience with production ML systems and A/B testing
    • English level: Intermediate+
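As a toy illustration of the cohort modelling listed above, a cohort-retention table can be computed like this (pure Python; the event tuples and integer month indexing are invented for the example):

```python
from collections import defaultdict

def cohort_retention(events):
    """events: (user_id, cohort_month, active_month) tuples, months as ints.
    Returns {cohort_month: {months_since_signup: retention_rate}}."""
    cohort_users = defaultdict(set)   # cohort -> all users who signed up in it
    active = defaultdict(set)         # (cohort, offset) -> users active then
    for user, cohort, month in events:
        cohort_users[cohort].add(user)
        active[(cohort, month - cohort)].add(user)
    return {
        cohort: {
            offset: len(active[(c, offset)]) / len(users)
            for (c, offset) in sorted(active) if c == cohort
        }
        for cohort, users in cohort_users.items()
    }

# Invented toy data: cohort 0 has two users, one returns the next month
events = [
    ("u1", 0, 0), ("u2", 0, 0), ("u1", 0, 1),
    ("u3", 1, 1), ("u3", 1, 2),
]
print(cohort_retention(events))
# {0: {0: 1.0, 1: 0.5}, 1: {0: 1.0, 1: 1.0}}
```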

       

    Nice to have

    • Experience with BigQuery
    • MLOps experience (Docker, CI/CD, model registries)
    • Experience working with performance marketing data (Meta, Google Ads, Adjust)
    • Knowledge of causal inference
    • Experience with AutoML and Bayesian models

       

    We Offer

    • Exciting challenges and growth prospects together with an international company
    • High decision-making speed and diverse projects
    • Flexibility in approaches, no processes for the sake of processes
    • Effective and friendly communication at any level
    • Highly competitive compensation package that recognizes your expertise and experience, plus a Performance Review practice to exchange feedback and discuss terms of cooperation
    • Flexible schedule, opportunity to work in a stylish and comfortable office or remotely
    • Respect for work-life balance (holidays, sick days - of course)
    • Bright corporate events and gifts for employees
    • Additional medical insurance
    • Compensation for specialized training and conference attendance
    • Restaurant lunches at the company's expense for those working in the office, endless supplies of delicious food all year round
       