Jobs Data Science

  • 48 views · 3 applications · 15d

    Data Engineer/Scientist

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - None

    🔥 We’re looking for a highly skilled Data Expert! 🔥

     

    Product | Remote

     

    About the role 

    We’re looking for a data engineer/scientist who bridges technical depth with curiosity. You’ll help Redocly turn data into insight, driving smarter product, growth, and business decisions.

     

    This role combines data governance, analytics, and development. You’ll build reliable data pipelines, improve observability, and uncover meaningful patterns that guide how we grow and evolve.

     

    You’ll work closely with product and technical teams to analyze user behavior, run experiments, build predictive models, and turn complex findings into actionable recommendations. You’ll also design and support systems for collecting, transforming, and analyzing data across our stack.

     

    What you’ll do 

    • Analyze product and user behavior to uncover trends, bottlenecks, and opportunities.
    • Design and evaluate experiments (A/B tests) to guide product and growth decisions.
    • Build and maintain data pipelines, ETL processes, and dashboards for analytics and reporting.
    • Develop and validate statistical and machine learning models for prediction, segmentation, and forecasting.
    • Design and optimize data models for new features and analytics (e.g., using dbt).
    • Work with event-driven architectures and standards like AsyncAPI and CloudEvents.
    • Collaborate with engineers to improve data quality, consistency, and governance across systems.
    • Use observability and tracing tools (e.g., OpenTelemetry) to monitor and improve performance.
    • Create visualizations and reports that clearly communicate results to technical and non-technical audiences.
    • Support existing frontend and backend systems related to analytics and data processing.
    • Champion experimentation, measurement, and data-driven decision-making across teams.
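
As a minimal, hypothetical sketch of the experiment-evaluation step listed above (all numbers invented): a two-proportion z-test comparing conversion rates between an A/B test's control and variant, using only the standard library.

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented numbers: 4.0% vs. 5.0% conversion on 10,000 users per arm.
z, p = two_proportion_ztest(400, 10_000, 500, 10_000)
```

In practice a library such as statsmodels would be used, but the arithmetic above is the whole test: a small p-value suggests the variant's lift is unlikely under the null of equal rates.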

     

    You're a great fit if you have 

    • 5+ years of software engineering experience, with 3+ years focused on data science or analytics.
    • Strong SQL skills and experience with data modeling (dbt preferred).
    • Solid understanding of statistics, hypothesis testing, and experimental design.
    • Proven experience in data governance, analytics, and backend systems.
    • Familiarity with columnar databases or analytics engines (ClickHouse, Postgres, etc.).
    • Experience with modern data visualization tools.
    • Strong analytical mindset, attention to detail, and clear communication.
    • Passionate about clarity, simplicity, and quality in both data and code.
    • English proficiency: Upper-Intermediate or higher.

     

    Nice to have

    • Understanding of product analytics and behavioral data.
    • Experience with causal inference or time-series modeling.
    • Strong proficiency with Node.js, React, JavaScript, and TypeScript.
    • Experience with frontend or backend performance optimization.
    • Familiarity with Git-based workflows and CI/CD for data pipelines.
       

    How you’ll know you’re doing a great job

    • Teams make better product decisions, faster, because of your insights.
    • Data pipelines are trusted, observable, and performant.
    • Experiments drive measurable product and business outcomes.
    • Metrics and dashboards are used across teams, not just built once.
    • You’re the go-to person for clarity when questions arise about “what the data says.”

     

    About Redocly

    Redocly builds tools that accelerate API ubiquity. Our platform helps teams create world-class developer experiences, from API documentation and catalogs to internal developer hubs and public showcases. We're a globally distributed team that values clarity, autonomy, and craftsmanship. You'll work alongside people who love developer experience, storytelling, and building tools that make technical work simpler and more joyful.

    Headquarter – Austin, Texas, US. There is also an office in Lviv, Ukraine.

     

    Redocly is trusted by leading tech, fintech, telecom, and enterprise teams to power API documentation and developer portals. Redocly’s clients range from startups to Fortune 500 enterprises.

    https://redocly.com/

     

    Working with Redocly

    • Team: 4-6 people (mid-to-senior level)
    • Team’s location: Ukraine & Europe
    • There are functional, product, and platform teams, each with its own ownership and line structure; teams decide for themselves when to hold weekly meetings.
    • Cross-functional teams are formed for each two-month cycle, giving team members the opportunity to work across all parts of the product.
    • Methodology: Shape Up

     

    Perks

    • Competitive salary based on your expertise (approximately $6,000 - $6,500 per month)
    • Full remote, though you’re welcome to come to the office occasionally if you wish.
    • Cooperation on a B2B basis with a US-based company (for EU citizens) or under a gig contract (for Ukraine).
    • After a year with the company, you can buy a certain number of the company’s shares
    • Around 30 days of vacation (unlimited, but let’s keep it reasonable)
    • 10 working days of sick leave per year
    • Public holidays according to local standards
    • No trackers and screen recorders
    • Working hours: EU/UA time zone, 8-hour working day; most people start at 10-11 am
    • Equipment provided – MacBooks (M1 – M4)
    • Regular performance reviews

     

    Hiring Stages

    • Prescreening (30-45 min)
    • HR Call (45 min)
    • Initial Interview (30 min)
    • Trial Day (paid)
    • Offer

     

    If you are an experienced Data Scientist and want to work on impactful data-driven projects, we’d love to hear from you!


    Apply now to join our team!

     

  • 42 views · 2 applications · 5d

    Machine Learning Engineer

    Full Remote · Ukraine · Product · 3 years of experience · English - None Ukrainian Product 🇺🇦

    Ready to level up your career?

    Our innovative BI unit is looking for a passionate Machine Learning Engineer to join the team. If you thrive on working with cutting-edge technologies and are eager to enhance our systems, we want to hear from you. Join us to drive impactful projects in a dynamic and forward-thinking environment.

     

    Your influential mission. You will...

    • Develop production data science applications in Python, end to end.
    • Design and develop new AI services from scratch.
    • Maintain and enhance current data science pipelines in complex high-load applications.
    • Collaborate closely with R&D and DevOps teams.
    • Research, invent and adapt machine learning algorithms for dedicated business needs.
    • Perform predictive and statistical modelling.
    • Perform ad-hoc analyses as required.

     

    Components for success. You...

    • Bring a minimum of 3 years of experience as a Python developer (including familiarity with PyCharm, git, and debugging).
    • Possess a minimum of 3 years of experience in predictive modeling, preferably within data science production services.
    • Have at least 3 years of experience in tabular or time series data analysis in a data developer, data scientist, or data analyst role.
    • Demonstrate proficiency in SQL for data manipulation and visualization.
    • Are a strong team player, with excellent communication skills, capable of working collaboratively in a remote environment.
    • Have a high level of English proficiency.
    • Hold a Bachelor's degree in Engineering, Economics, Mathematics, Statistics, or similar.

     

    You'll get extra points for...

    • Experience in REST API and cloud deployment.
    • Knowledge of AWS or GCP.
    • Master's degree in Engineering, Economics, Mathematics, Statistics, or similar.
    • Having experience in direct communication with business.

     

    Thrive in a culture that values...

    • Engaged User Experiences - We develop personalized recommendations and churn mitigation strategies to keep users coming back for more.
    • Data-Driven Decision Making - We generate insightful reports to empower informed decisions across the organization.
    • Cutting-Edge Technologies - We're constantly exploring and integrating new technologies to enhance our capabilities.
    • Responsible Development - We prioritize responsible user experiences through innovative solutions.

     

    AI Unit
    We're an innovative team designing and developing AI-powered services to fit complex business needs in the constantly evolving gaming market. We're passionate about pushing the limits and creating unique solutions with a big positive impact.

    Join us and be a part of the future!

  • 22 views · 0 applications · 15d

    Senior/Lead Machine Learning Engineer

    Full Remote · Ukraine · Product · 6 years of experience · English - B2

    About The Role

    As a Senior/Lead Software Engineer in our Machine Learning department, you will take ownership of high-impact initiatives that drive product quality, monetization, and user experience. You’ll apply your expertise in Python, data analysis, and modern AI tools to build, deploy, and optimize ML solutions end-to-end. This is a hands-on technical role that blends data science and engineering to deliver measurable business outcomes. You’ll collaborate closely with cross-functional stakeholders and contribute to evolving areas such as LLMs, Generative AI, and ML infrastructure. The role offers exposure to complex data challenges and the opportunity to shape Pearl’s AI-driven capabilities.



    What You’ll Do

    • Build, train, and deploy ML models that improve product performance and monetization.
    • Collaborate with data, product, and engineering teams across multiple internal initiatives.
    • Design and implement data pipelines for extraction, transformation, and analysis.
    • Apply statistical and machine learning techniques to real-world business problems.
    • Work on projects involving LLMs and Generative AI technologies.
    • Analyze large datasets to identify trends, insights, and optimization opportunities.
    • Integrate ML models into production systems and monitor their performance.
    • Communicate findings and recommendations clearly to technical and non-technical stakeholders.
    • Experiment with emerging AI tools and methodologies to enhance existing workflows.
    • Ensure data quality, reliability, and scalability in all deliverables.



    What We’re Looking For

    • 3+ years of hands-on experience in ML Engineering or Data Science.
    • 5-6+ years total experience in software or data engineering.
    • Strong programming skills in Python.
    • Proficiency in SQL for data querying and analysis.
    • Solid understanding of data processing, analysis, and visualization.
    • Experience with LLMs / Generative AI tools (e.g., OpenAI, Copilot, Cursor).
    • Ability to build and deploy ML models end-to-end.
    • Strong analytical and problem-solving mindset.
    • Experience with .NET for integration work.
    • Upper-intermediate English proficiency (B2 or higher).
    • Familiarity with Databricks or similar data platforms (nice-to-have).
    • Exposure to AWS or other cloud-based ML environments (nice-to-have).



    Why Join Us?

    • High-Impact Data - Work with vast datasets generated by over a million daily visits, uncovering valuable insights.
    • Collaborative Analytics Team - Join a team of 30+ skilled analysts who actively share knowledge and support each other.
    • Cutting-Edge Tools & Techniques - Get hands-on experience with ML, Bayesian methods, and LLMs to solve real business challenges.
    • Data-Driven Culture - Thrive in an environment where data informs every decision and drives continuous improvement.
  • 39 views · 3 applications · 19d

    Computer Vision Engineer (3D Reconstruction)

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · English - None

     Computer Vision Engineer (3D Reconstruction)

     

    We are seeking a Computer Vision Engineer to develop high-precision 3D reconstruction and measurement systems utilizing Shape from Polarization (SfP). In this role, you will bridge the gap between optical physics and geometry to extract submillimeter surface details from polarimetric data.

     

    Crucial Skills (Must-Have Foundation)


    ● Solid understanding of the nature of polarized light and the use of the Stokes vector to describe light states.
    ● Strong proficiency in linear algebra and vector calculus, particularly for coordinate system transformations, 3D geometry, and extracting signals from noisy data.
    ● Fluency in Python (NumPy, SciPy) for data analysis and algorithm implementation.
    ● Experience with standard computer vision libraries for image filtering, alignment, and feature detection, such as OpenCV and scikit-image.
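
As a minimal sketch of what the Stokes-vector bullet above involves (all array values invented): given intensity images captured behind linear polarizers at 0°, 45°, 90°, and 135° (a common polarization-camera layout), the first three Stokes components yield the degree and angle of linear polarization, the raw per-pixel signals that SfP methods start from.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Stokes S0, S1, S2 from four linear-polarizer intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical preference
    s2 = i45 - i135                      # diagonal preference
    return s0, s1, s2

def dolp_aolp(s0, s1, s2):
    """Degree and angle of linear polarization per pixel."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)      # radians
    return dolp, aolp

# Invented 2x2 "images": light fully polarized at 0 degrees in every pixel.
i0   = np.full((2, 2), 1.0)
i45  = np.full((2, 2), 0.5)
i90  = np.full((2, 2), 0.0)
i135 = np.full((2, 2), 0.5)
dolp, aolp = dolp_aolp(*linear_stokes(i0, i45, i90, i135))
```

In an SfP pipeline, the AoLP/DoLP maps would then be related to surface normal azimuth and zenith angles; this snippet stops at the measurement step.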


    Useful & Desirable Skills (Nice-to-Have)


    ● Familiarity with surface normals, depth maps, and Shape-from-X techniques (e.g., shading, stereo, or motion).
    ● Experience working with polarization cameras.
    ● Deep understanding of light reflection and scattering across different materials and surface textures.
    ● Experience with normal integration approaches for 3D reconstruction to recover continuous surfaces.
     

  • 39 views · 2 applications · 19d

    Data Scientist (IQOS)

    Full Remote · Ukraine · Product · 3 years of experience · English - None

    Are you passionate about turning data into powerful consumer experiences? At PMI, we’re transforming our business and shaping a smoke-free future, and data is at the heart of that journey. We’re looking for a Data Scientist who will be responsible for developing ML & AI-powered solutions that drive personalization, audience targeting, and strategic decision-making. If you thrive in a fast-paced, collaborative environment and want to work with cutting-edge technologies like LLMs and generative AI, this is your opportunity to make a real impact.
     

    What You’ll Do:
     

    • Design Smart Audience Strategies
       Segment and profile consumer databases to uncover high-value audiences. Build data-driven frameworks for targeting and personalization that align with business goals and drive measurable impact.
    • Build Scalable ML & AI Solutions
       Develop and optimize large-scale data pipelines using advanced machine learning and AI. Create predictive models for segmentation, personalization, and behavioral forecasting, continuously improving accuracy and performance.
    • Operationalize Insights Across Channels
       Integrate analytics and predictive models into CRM, marketing automation, and digital platforms. Collaborate with cross-functional teams to enable real-time personalization and optimize campaign performance.
    • Lead External Data Acquisition & Partnerships
       Source and manage 2nd/3rd party data from retailers, telcos, digital platforms, and research providers. Own the full process, from defining technical requirements and templates to budget oversight and vendor negotiations.
    • Drive Innovation with Emerging AI Technologies
       Explore and implement cutting-edge AI tools like LLMs, STTs, and generative AI. Prototype and scale solutions for content generation, sentiment analysis, and conversational automation, always with a focus on ethical AI and business value.
    • Be a Thought Leader & Collaborator
       Act as a go-to expert in analytics and AI. Mentor teams, manage external partners, and champion innovation across the organization.

       

    Who Are We Looking For?
     

    Hard Skills

    • Higher Education in Computer Science, System Analysis, Informatics, Mathematics.
    • 2+ years of experience translating complex business problems into actionable insights through hands-on analytics.
    • 1.5+ years of experience in data science, machine learning, and predictive modeling.
    • Strong SQL and Python skills (e.g., NumPy, Pandas, scikit-learn, Matplotlib).
    • Experience with BI tools like Power BI, Looker or Tableau.
    • Solid understanding of the ML lifecycle - from EDA to production deployment.
    • Experience with LLMs, NLP, STT, or audio analytics is a strong plus.
    • English: Upper-Intermediate or higher.

    Soft Skills

    • High level of autonomy, ownership, and accountability.
    • Excellent communication skills - able to explain complex ideas clearly.
    • Passion for solving meaningful problems and improving consumer experience.
    • Curiosity and drive to stay ahead in the fast-evolving analytics and AI landscape.
       

    What We Offer
     

    Our success depends on our talented employees who come to work here every single day with a sense of purpose and an appetite for progress. Join PMI and you too can:

    • Seize the freedom to define your future and ours. We’ll empower you to take risks, experiment and explore.
    • Be part of an inclusive, diverse culture, where everyone’s contribution is respected; collaborate with some of the world’s best people and feel like you belong.
    • Pursue your ambitions and develop your skills with a global business – our staggering size and scale provides endless opportunities to progress.
    • Take pride in delivering our promise to society: to deliver a smoke-free future.
  • 73 views · 7 applications · 15d

    Data Science Engineer

    Spain, Poland, Portugal, Ukraine · 1 year of experience · English - B2

    Quantum is a global technology partner delivering high-end software products that address real-world problems. 

    We advance emerging technologies for outside-the-box solutions, focusing on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.

     

    About the position

    Quantum expands the team in Central Europe and has brilliant opportunities for Data Science Engineers. 

    If you are interested in working on areas related to Data Analysis, fintech, image processing, and solving real-world challenges with innovative technologies, apply for the vacancy below. 

     

    Must have skills:

    • At least 1.5 years of commercial experience as a Data Science Engineer
    • Strong knowledge of linear algebra, calculus, statistics, and probability theory
    • Knowledge and experience with algorithms and data structures
    • Strong experience with Machine Learning
    • Expertise in areas of Computer Vision or Natural Language Processing
    • Knowledge of modern neural network architectures (DNN, CNN, LSTM, etc.)
    • Experience with at least one of the Deep Learning frameworks (TensorFlow, PyTorch)
    • Experience with SQL
    • Strong knowledge of OOP
    • At least an Upper-Intermediate level of English (spoken and written)

     

    Nice to have skills:

    • Experience with production ML/DL frameworks (OpenVINO, TensorRT, etc.)
    • Docker practical experience
    • Experience with Cloud Computing Platforms (AWS, GCloud, Azure)
    • Participation in Kaggle competitions

     

    Your tasks will include:

    • Full-cycle data science projects
    • Data analysis and data preparation
    • Development of Machine Learning / Computer Vision / Deep Learning / NLP solutions; Developing models and deploying them to production
    • Sometimes, this will require the ability to implement methods from scientific papers and apply them to new domains

     

    We offer:

    • Delivering high-end software projects that address real-world problems
    • Surrounding experts who are ready to move forward professionally
    • Professional growth plan and team leader support
    • Taking ownership of R&D and socially significant projects
    • Participation in worldwide tech conferences and competitions
    • Taking part in regular educational activities
    • Being a part of a multicultural company with a fun and lighthearted atmosphere
    • Working from anywhere with flexible working hours
    • Paid vacation and sick leave days

     

    Join Quantum and take a step toward your data-driven future.

  • 160 views · 8 applications · 30d

    Senior Data Scientist

    Full Remote · Ukraine · Product · 2 years of experience · English - B1 Ukrainian Product 🇺🇦

    Hello!

     

    We are E-Com, a team of Foodtech and Ukrainian product lovers.

    And we also break stereotypes that retail is only about tomatoes. Believe me, the technical part of our projects provides a whole field for creativity and brainstorming.

     

    What we are currently working on:

    • we are upgrading the existing delivery of a wide range of products from Silpo stores;
    • we are developing super-fast delivery of products and dishes under the new LOKO brand.

     

    We are developing a next-generation Decision Support Platform that connects demand planning, operational orchestration, and in-store execution optimization into one unified Analytics and Machine Learning Ecosystem.

     

    The project focuses on three major streams: 

    • Demand & Forecasting Intelligence: building short-term demand forecasting models, generating granular demand signals for operational planning, identifying anomalies, and supporting commercial decision logic across virtual warehouse clusters.

    • Operational Orchestration & Task Optimization: designing predictive models for workload estimation, task duration (ETA), and prioritization. Developing algorithms that automatically map operational needs into structured tasks and optimize their sequencing and allocation across teams.

    • In-Store Execution & Routing Optimization: developing models that optimize picker movement, predict in-store congestion, and recommend optimal routes and execution flows. Integrating store layout geometry, product characteristics, and operational constraints to enhance dark-store efficiency.

     

    You will join a cross-functional team to design and implement data-driven decision modules that directly influence commercial and operational decisions.

     

    Responsibilities:

    • develop and maintain ML models for forecasting short-term demand signals and detecting anomalies across virtual warehouse clusters;

    • build predictive models to estimate task workload, execution times (ETA), and expected operational performance;

    • design algorithms to optimize task distribution, sequencing, and prioritization across operational teams;

    • develop routing and path-optimization models to improve picker movement efficiency within dark stores;

    • construct data-driven decision modules that integrate commercial rules, operational constraints, and geometric layouts;

    • translate business requirements into ML-supported decision flows and automate key parts of operational logic;

    • build SQL pipelines and data transformations for commercial, operations, and logistics datasets;

    • work closely with supply chain, dark store operations, category management, and IT to deliver measurable improvements;

    • conduct A/B testing, validate model impact, and ensure high-quality model monitoring.

     

    Requirements:

    • Bachelor’s degree in Mathematics / Quantitative Economics / Econometrics / Statistics / Computer Science / Finance;

    • at least 2 years of working experience in Data Science;

    • strong mathematical background in Linear Algebra, Probability, Statistics & Optimization Techniques;

    • proven experience with SQL (window functions, CTEs, joins) and Python;

    • expertise in Machine Learning, Time Series Analysis, and application of Statistical Concepts (hypothesis testing, A/B tests, PCA);

    • ability to work independently and decompose complex problems.
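
To illustrate the SQL features named in the requirements (CTEs and window functions) in runnable form, here is a hypothetical sketch using Python's built-in sqlite3; the table, columns, and numbers are all invented, standing in for the kind of demand data described above.

```python
import sqlite3

# In-memory database with an invented per-cluster daily sales table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (cluster TEXT, day INTEGER, units INTEGER);
INSERT INTO sales VALUES
  ('A', 1, 10), ('A', 2, 14), ('A', 3, 12),
  ('B', 1, 5),  ('B', 2, 7),  ('B', 3, 9);
""")

query = """
WITH daily AS (                        -- CTE: one row per cluster/day
  SELECT cluster, day, SUM(units) AS units
  FROM sales GROUP BY cluster, day
)
SELECT cluster, day, units,
       AVG(units) OVER (               -- window function: trailing 3-day mean
         PARTITION BY cluster ORDER BY day
         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
       ) AS ma3
FROM daily ORDER BY cluster, day;
"""
rows = conn.execute(query).fetchall()
```

Window functions require SQLite 3.25+, which ships with current Python builds; the same query shape carries over to warehouse engines such as Postgres.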

     

    Preferred:

    • experience with Airflow, Docker, or Kubernetes for Data Orchestration;

    • practical experience with Amazon SageMaker: training, deploying, and monitoring ML models in a production environment;

    • knowledge of Reporting and Business Intelligence Software (Power BI, Tableau, Looker);

    • ability to design and deliver packaged analytical/ML solutions.

     

    What we offer

    • competitive salary;
    • opportunity to work on flagship projects impacting millions of users;
    • flexible remote or office-based work (with backup power and reliable connectivity at SilverBreeze Business Center);
    • flexible working schedule;
    • medical and life insurance packages;
    • support for GIG contract or private entrepreneurship arrangements;
    • discounts at Fozzy Group stores and restaurants;
    • psychological support services;
    • caring corporate culture;
    • a team where you can implement your ideas, experiment, and feel like you are among friends.
  • 52 views · 11 applications · 27d

    Data Scientist – Autonomous Systems

    Worldwide · Product · 3 years of experience · English - None MilTech 🪖

    We are seeking a Data Scientist with a strong foundation in physics, control theory, and mathematical modeling to join our team working on cutting-edge autonomous systems. The ideal candidate combines analytical rigor with practical experience in modeling, simulation, and algorithm development for autonomous platforms.

    Levels: Middle and Senior (responsibilities and scope will be adjusted accordingly).

    Key Responsibilities

    • Develop and validate mathematical models for autonomous systems and dynamic environments.
    • Apply data-driven approaches for system identification, optimization, and predictive control.
    • Analyze large datasets from sensors and simulations to extract insights and improve system performance.
    • Design and implement algorithms for control, navigation, and decision-making.
    • Collaborate with cross-functional teams to integrate models into real-world autonomous platforms.

    Required Qualifications

    • 3+ years in R&D or applied data science/software development.
    • Strong background in mathematical modeling, system identification, and control theory.
    • Proficiency in Matlab/Simulink for modeling and simulation.
    • Experience in signal processing and data analysis.
    • Programming skills in Python and C++.
    • Ability to quickly research and apply recent trends in control theory, autonomous systems, and data-driven modeling.
    • Relevant work experience or education in a STEM field.

    Nice to Have

    • Knowledge of aerodynamics fundamentals.
    • Experience with Machine Learning (e.g., reinforcement learning, predictive modeling).
    • Familiarity with simulation tools such as Gazebo, AirSim.
    • Hands-on experience with SITL/HITL testing.
    • Exposure to flight control stacks like PX4, Betaflight, ArduPilot.
  • 12 views · 2 applications · 1d

    Senior Data Scientist

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Description:

    Our platform supports AI-generated creatives and uses contextual signals, consent frameworks, and privacy-compliant data to select and serve ads.
    We are looking for an experienced Data Scientist to work on Teza’s data-driven optimization layer. You will design and develop the intelligence that powers our data-driven decision-making.
     

    Requirements:

    - BSc/MSc/PhD in Data Science, ML, Statistics, or related field
    - Strong experience in ML modeling, feature engineering, and experimentation
    - Experience in ad tech, marketing tech, or real-time bidding is a must!
    - Experience with predictive modeling and ranking/recommendation systems
    - Experience working with large-scale datasets (BigQuery, Spark, etc.)
    - Strong Python skills
    - Experience with embeddings, text classifiers, and LLM-based scoring
    - Strong SQL skills

     

    Responsibilities:

    - Build bidding AI models
    - Analyze large-scale event data to drive decision-making
    - Improve campaign selection and safety classifiers
    - Develop AI metrics for real-time optimization
    - Work closely with engineering, product, and AI teams on core logic

     

    Nice to have:
    - Experience with reinforcement learning or bandits
    - Knowledge of DSP/RTB systems
    - Experience with fraud detection or brand safety
    - Experience deploying ML models in production
    - Experience with campaign optimization in Google/Facebook/RTB

  • 74 views · 4 applications · 6d

    IT/Data management specialist to $500

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · English - B2

    About Keymakr

    • Keymakr specializes in end-to-end dataset preparation, including video and image annotation and labeling for AI projects.
      We take a personalized approach, delivering customized solutions tailored to each client’s needs.

       

    • Our strong in-house R&D team develops and adapts annotation tools for every client, allowing us to successfully handle even the most complex requirements.

     

    About the Role:

    • We are looking for a Data Management Specialist to manage, process, and maintain large volumes of data and media assets used in AI and technology projects.

       

    • This role focuses on ensuring data integrity, consistency, and availability while optimizing workflows through automation and close collaboration with cross-functional teams.

       

    Key Responsibilities:

    1. Manage and manipulate data using Linux file systems and native OS tools;
    2. Work with media files, including video formats, codecs, resolution, and frame rate (FPS);
    3. Verify data integrity using checksum and verification tools;
    4. Pack and unpack large data archives efficiently;
    5. Automate data-related workflows using scripting languages;
    6. Prepare, structure, and maintain datasets in structured formats;
    7. Maintain cloud-based data storage and perform data migration;
    8. Collaborate with Project, QA, and Technical teams to ensure smooth data operations;
    9. Clearly and consistently document workflows, processes, and task updates.
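The integrity-check and automation responsibilities above (items 3 and 5) can be sketched in a short script. This is a minimal illustration only: the manifest format follows the `sha256sum` convention, and all file names are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: Path) -> dict[str, bool]:
    """Check each 'digest  filename' line (sha256sum format) against disk."""
    results: dict[str, bool] = {}
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        digest, name = line.split(maxsplit=1)
        target = manifest.parent / name
        results[name] = target.exists() and sha256sum(target) == digest
    return results
```

The same check can be run from the shell with `sha256sum -c manifest.txt`; a script like this is useful when the result must feed further automation.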

     

    Technical Skills:

    1. Strong knowledge of Linux file systems and administration (Ubuntu/Debian, 2+ years);
    2. Experience with Windows OS administration;
    3. Proficiency in Python and Bash / Shell scripting;
    4. Experience with archive management tools for large datasets;
    5. Knowledge of checksum verification tools (md5sum, sha256sum);
    6. Tools for checking video properties, codecs, and formats;
    7. Solid knowledge of JSON and CSV formats;
    8. Experience with cloud storage platforms: AWS S3, Google Cloud Storage, or Azure;
    9. Familiarity with JIRA, Slack, Confluence, and Google Docs.

     

    Nice to Have:

    1. Experience with point clouds / 3D data;
    2. Strong understanding of complex technical specifications;
    3. Experience working with large-scale datasets.

    English & Communication:

    1. English level B2 (Upper-Intermediate) or higher;
    2. Ability to read and understand technical documentation and project requirements;
    3. Clear written communication in English (reports, updates, documentation);
    4. Participation in English-language meetings when required.

       

  • Β· 38 views Β· 5 applications Β· 18d

    LLM Research Engineer

    Full Remote Β· Ukraine Β· Product Β· 3 years of experience Β· English - B2
    We are seeking an experienced Data Scientist with a passion for large language models (LLMs) and cutting-edge AI research. In this role, you will design and prototype data preparation pipelines, collaborating closely with data engineers to transform your...

    We are seeking an experienced Data Scientist with a passion for large language models (LLMs) and cutting-edge AI research. In this role, you will design and prototype data preparation pipelines and collaborate closely with data engineers to turn those prototypes into scalable production pipelines. You will also design and implement a state-of-the-art evaluation and benchmarking framework to measure and guide model quality, and train LLMs end-to-end. You will work alongside top AI researchers and engineers, ensuring our models are not only powerful but also aligned with user needs, cultural context, and ethical standards.

     

    What you will do

    • Curate datasets for pre-training, supervised fine-tuning, and alignment;
    • Research and develop best practices and novel techniques in LLM training and evaluation pipelines;
    • Collaborate closely with data engineers, annotators, linguists, and domain experts to scale data processes, define evaluation tasks and collect high-quality feedback.

     

    Qualifications and experience needed
    Education & Experience:

    • 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP;
    • An advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.

    GenAI & NLP Expertise:

    • Practical experience with fine-tuning LLMs / VLMs models;
    • Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, RAGs.

    ML & Programming Skills:

    • Strong experience with deep learning frameworks such as PyTorch or JAX for building models;
    • Ability to write efficient, clean code and debug complex model issues.


    A plus would be
    Advanced NLP/ML Techniques:

    • Applied experience using Reinforcement Learning in NLP / LLM settings;
    • Prior work on LLM safety, fairness, and bias mitigation;
    • Experience generating and curating synthetic datasets for Supervised Fine-Tuning (SFT), including quality control and scaling considerations.

    Research & Community:

    • Publications in NLP/ML conferences or contributions to open-source NLP projects;
    • Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicates a passion for staying at the forefront of the field.

    MLOps & Infrastructure:

    • Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow);
    • Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models;
    • Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training is a plus.

    Problem-Solving:

    • Innovative mindset with the ability to approach open-ended AI problems creatively;
    • Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

     

    What we offer

    • Office or remote β€” it’s up to you. You can work from anywhere, and we will arrange your workplace;
    • Remote onboarding;
    • Performance bonuses;
    • We train employees with the opportunity to learn through the company’s library, internal resources, and programs from partners;
    • Health and life insurance;
    • Wellbeing program and corporate psychologist;
    • Reimbursement of expenses for Kyivstar mobile communication.
  • Β· 35 views Β· 3 applications Β· 15d

    Senior Data Scientist

    Full Remote Β· Ukraine Β· 5 years of experience Β· English - None
    About the company Jappware is a software development company that delivers innovative and reliable digital solutions for international clients. We specialize in end-to-end product development β€” from ideation and design to architecture, development, and...

    About the company

    Jappware is a software development company that delivers innovative and reliable digital solutions for international clients.
    We specialize in end-to-end product development β€” from ideation and design to architecture, development, and DevOps support.

    About the project
    We are looking for a Senior Data Scientist to join our growing team in Lviv or remotely.
    We’re building a brand-new Real Estate platform with an AI-powered Lead Generation Pipeline at its core.
    This is a hands-on role combining Data Science, Analytics, and Data Engineering β€” perfect for someone who wants to build from scratch and influence product direction.

    Responsibilities
    ● Build the end-to-end Lead Generation Pipeline for Real Estate
    ● Create and manage structured property feature sets
    ● Run EDA, modeling, and hypothesis testing
    ● Design and maintain ETL/ELT pipelines
    ● Work with feature stores and columnar file formats (Parquet, etc.)
    ● Collaborate with engineering to integrate models into production
    ● Shape data architecture and validate product ideas through prototypes

    Requirements
    ● 5+ years in Data Science
    ● Strong Python & SQL knowledge
    ● Proven ability to build analytical and data pipelines from scratch
    ● Hands-on, autonomous, proactive mindset
    ● Strong communication and analytical thinking skills

    What we are offering
    ● Challenging and innovative environments.
    ● Flexible schedule and remote-friendly culture.
    ● 20 paid vacation days and 15 sick leave days.
    ● Quarterly budget for learning & development activities.
    ● Team events, workshops, and internal tech meetups.
    ● IT Club membership.

    Steps to Expect in Jappware’s Hiring Process:
    ● Intro Interview
    ● Technical Interview
    ● Offer

    Our Mission:

    To build innovative software in trustworthy partnerships.
    We aim to become a reliable and forward-thinking technology partner, helping businesses grow through innovation and mutual trust.

    Our Values

    Trust β€” Every successful partnership is built on openness, honesty, and sincerity.
    Openness β€” We encourage people to share ideas freely and foster transparent communication.
    Partnership β€” We treat our clients’ and teammates’ goals as our own.
    Proactiveness β€” We act ahead of possible outcomes and anticipate challenges to deliver the best results.

    Social Responsibility
    At Jappware, we stand with our people and our country.
    We proudly support Ukraine’s resilience, innovation, and global contribution to the IT community.
    Through donations, volunteering, and social initiatives, we help strengthen our local communities and the nation’s future.

    Jappware stands with Ukraine β€” Glory to Ukraine!

    Follow us via LinkedIn, DOU, Instagram, Facebook

     

  • Β· 22 views Β· 2 applications Β· 13d

    Senior/Middle Data Scientist

    Full Remote Β· Ukraine Β· Product Β· 3 years of experience Β· English - B1
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of...

    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will design and implement a state-of-the-art evaluation and benchmarking framework to measure and guide model quality, and personally train LLMs with a strong focus on Reinforcement Learning from Human Feedback (RLHF). You will work alongside top AI researchers and engineers, ensuring the models are not only powerful but also aligned with user needs, cultural context, and ethical standards.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in machine learning model evaluation and/or NLP benchmarking.
    - Advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, RAGs.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Solid understanding of RLHF concepts and related techniques (preference modeling, reward modeling, reinforcement learning).
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience creating and managing test datasets, including annotation and labeling processes.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.

    Nice to have:
    Advanced NLP/ML Techniques:
    - Prior work on LLM safety, fairness, and bias mitigation.
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Knowledge of data annotation workflows and human feedback collection methods.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian benchmarks, or familiarity with other evaluation datasets and leaderboards for large models, can be an advantage given the project’s focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Analyze benchmarking datasets, define gaps, and design, implement, and maintain a comprehensive benchmarking framework for the Ukrainian language.
    - Research and integrate state-of-the-art evaluation metrics for factual accuracy, reasoning, language fluency, safety, and alignment.
    - Design and maintain testing frameworks to detect hallucinations, biases, and other failure modes in LLM outputs.
    - Develop pipelines for synthetic data generation and adversarial example creation to challenge the model’s robustness.
    - Collaborate with human annotators, linguists, and domain experts to define evaluation tasks and collect high-quality feedback.
    - Develop tools and processes for continuous evaluation during model pre-training, fine-tuning, and deployment.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Analyze benchmarking results to identify model strengths, weaknesses, and improvement opportunities.
    - Work closely with other data scientists to align training and evaluation pipelines.
    - Document methodologies and share insights with internal teams.

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • Β· 61 views Β· 3 applications Β· 29d

    Senior/Middle Data Scientist

    Full Remote Β· Ukraine Β· Product Β· 3 years of experience Β· English - B1
    About us: Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently...

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, RAGs.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas.

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar processing pipelines approach.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project’s focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form specific SFT/RLHF datasets from existing data, including data augmentation/labeling with an LLM as teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.
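As an illustration of the de-duplication step listed above, here is a minimal sketch of exact de-duplication over normalized text. Function names are hypothetical, and production pipelines typically layer fuzzy methods (e.g., MinHash) on top of an exact pass like this.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical records hash equally."""
    return re.sub(r"\s+", " ", text.strip().lower())

def deduplicate(docs: list[str]) -> list[str]:
    """Keep the first occurrence of each normalized document (exact dedup)."""
    seen: set[str] = set()
    unique: list[str] = []
    for doc in docs:
        key = hashlib.sha1(normalize(doc).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique
```

Hashing the normalized form instead of the raw string keeps memory bounded by digest size and catches duplicates that differ only in casing or whitespace.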

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • Β· 13 views Β· 1 application Β· 29d

    Data scientist with Java expertise

    Full Remote Β· Ukraine Β· 5 years of experience Β· English - B2
    Project Description: The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week. Solutions are delivered by several...

    Project Description:

    The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
    Solutions are delivered by several Product Teams focused on different domains: Customer, Loyalty, Search and Browse, Data Integration, and Cart.
    Current overriding priorities are new brands onboarding, re-architecture, database migrations, migration of microservices to a unified cloud-native solution without any disruption to business.

     

    Responsibilities:

    We are looking for an experienced Data Engineer with Machine Learning expertise and a good understanding of search engines to work on the following:
    - Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.
    - Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.
    - Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.
    - Integrate ML models and search capabilities into production systems.
    - Evaluate, fine-tune, and monitor search performance metrics.
    - Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.
    - Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.

     

    Mandatory Skills Description:

    - 5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.
    - Strong programming experience in both Java and Python (production-level code, not just prototyping).
    - Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).
    - Experience with Vector Databases, Embeddings, and Semantic Search techniques.
    - Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).
    - Experience deploying and maintaining ML/search systems in production.
    - Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).

     

    Nice-to-Have Skills Description:

    - Experience working in distributed teams with US customers.
    - Experience with LLMs, RAG pipelines, and vector retrieval frameworks.
    - Knowledge of Spring Boot, FastAPI, or similar backend frameworks.
    - Familiarity with Kubernetes, Docker, and cloud platforms (AWS/Azure/GCP).
    - Experience with MLOps and model monitoring tools.
    - Contributions to open-source search or ML projects.

     

    Languages:

    English: B2 Upper Intermediate
