Jobs (119)

  • 26 views · 1 application · 18d

    Senior Data Scientist

    Full Remote · Ukraine, Poland, Romania · 5 years of experience · B2 - Upper Intermediate

    Our customer (originally the Minnesota Mining and Manufacturing Company) is an American multinational conglomerate operating in the fields of industry, worker safety, and consumer goods. Based in the Saint Paul suburb of Maplewood, the company produces over 60,000 products, including adhesives, abrasives, laminates, passive fire protection, personal protective equipment, window films, paint protection film, electrical and electronic connecting and insulating materials, car-care products, electronic circuits, and optical films.

     

    We are seeking an experienced and innovative Senior Data Scientist to join our advanced research and development team. In this role, you will be responsible for leading the design, development, and deployment of cutting-edge machine learning models and statistical algorithms to tackle our most complex business problems. You will leverage your deep expertise in statistical computing, NLP, and reinforcement learning to build real-world systems that create significant value. As a senior member of the team, you will also guide technical strategy and champion a culture of rigorous experimentation and data-driven excellence.

     

    Required Qualifications & Skills

    • A Master's or PhD in Statistics, Applied Mathematics, Machine Learning, Computer Science, or a related quantitative field.
    • 5+ years of hands-on experience designing and implementing statistical computing, NLP, and/or reinforcement learning algorithms for real-world systems.
    • Strong theoretical and practical background in statistical computing, including experimental design, fractional factorial and response surface methodology, and multi-objective optimization.
    • Expert-level proficiency in Python and its core data science libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow, PyTorch).
    • Proven experience in synthetic data generation, including stochastic process simulations.
    • Excellent problem-solving abilities with a creative and analytical mindset.
    • Strong communication and collaboration skills, with a proven ability to present complex results to diverse audiences and thrive in a fast-paced R&D environment.

    Preferred Qualifications & Skills

    • Industry experience in Consumer Packaged Goods (CPG) or a related field.
    • Experience contributing to or developing enterprise data stores (e.g., data meshes, lakehouses).
    • Knowledge of MLOps, DevOps methodologies, and CI/CD practices for deploying and managing models in production.
    • Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
    • Experience working with and consuming data from REST APIs.

       

    Key Responsibilities

    • Model Development & Implementation: Lead the end-to-end lifecycle of machine learning projects, from problem formulation and data exploration to designing, building, and deploying advanced statistical, NLP, and/or reinforcement learning models in production environments.
    • Advanced Statistical Analysis: Apply a strong background in statistical computing to design and execute complex experiments (including A/B testing, fractional factorial design, and response surface methodology) to optimize systems and products.
    • Algorithm & Solution Design: Architect and implement novel algorithms for multi-objective optimization and synthetic data generation, including stochastic process simulations, to solve unique business challenges.
    • Technical Leadership & Mentorship: Provide technical guidance and mentorship to junior and mid-level data scientists, fostering their growth through code reviews, knowledge sharing, and collaborative problem-solving.
    • Cross-Functional Collaboration: Work closely with product managers, engineers, and business stakeholders to identify opportunities, define project requirements, and translate complex scientific concepts into actionable business insights.
    • Research & Innovation: Stay at the forefront of the machine learning and data science fields, continuously researching new techniques and technologies to drive innovation and maintain our competitive edge.
  • 20 views · 3 applications · 17d

    Senior Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate

    About the Role

    We're looking for a Senior Machine Learning Engineer to join the team that designs, builds, and scales ML systems powering personalization and customer engagement at a massive scale.
    You'll work on production-ready ML pipelines, real-time decision-making systems, and experimentation frameworks that directly impact millions of users worldwide.
     

    What You’ll Do

    • Develop pipelines that transform behavioral, demographic, and contextual data into real-time features.
    • Design APIs and services for low-latency prediction and decision-making.
    • Implement frameworks for A/B testing, bandit algorithms, and model evaluation.
    • Collaborate with product and engineering teams to balance engagement, business value, and compliance.
    • Build monitoring, logging, and retraining workflows to continuously validate and improve models.

    What We Expect

    • 5+ years of experience in applied ML engineering (recommendation systems, personalization, ranking, or ads).
    • Strong knowledge of Python and/or Go, SQL, and ML frameworks such as TensorFlow or PyTorch.
    • Hands-on experience deploying real-time ML systems (low-latency serving, feature stores, event-driven architectures).
    • Familiarity with cloud ML platforms (Vertex AI, SageMaker, or similar).
    • Experience with data warehouses (BigQuery, Snowflake, Redshift).
    • Solid understanding of multi-objective optimization and personalization trade-offs.
    • Ability to thrive in a fast-paced, startup-style environment.

    ➕ Nice to Have

    • Experience in martech, adtech, CRM, or large-scale consumer personalization.
    • Exposure to bandit algorithms or reinforcement learning.
    • Previous work on systems serving millions of users.
    • Experience with Google Cloud Platform (GCP).

    We Offer

    • Opportunity to shape large-scale personalization technology.
    • Competitive compensation matching your skills and experience.
    • Professional growth budget for conferences and training.
    • Fully remote and flexible work setup.
    • Collaborative and innovative team culture focused on impact.
    • Online & offline team activities.

    Schedule: 9am–4pm EST
    Format: Full-time / Remote / Worldwide
     

    🚀 Join us to build personalization systems that reach millions!
    Send your resume or Djinni profile; we'd love to meet you.

  • 332 views · 53 applications · 6d

    Junior Data Scientist

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience

    About Company

    AIstats is an innovative ecosystem of football products that combines cutting-edge big data analysis and computer vision technology.

    We are a fast-growing team of 40+ people based in Poland. Recently, we secured investments from Google executives and leading mobile company founders, which helped us increase our company's value fourfold in the past year.

    Our goal is to deepen fans' understanding of soccer and create new opportunities for professionals to analyze matches.

     

    Our Products

    • AIstats Mobile App – a consumer platform for football fans and bettors with over 100K monthly active users.
    • AIstats Business Solutions – pro-level analytics tools powered by machine learning and computer vision, built for clubs, scouts, and agents.

       

    Team Culture and Values

    AIstats is the place for those who want to transform the industry! We are building a long-term business to lead football analytics. Our goal is not just to create a product but to build an entire ecosystem. We value ambition, responsibility, and a results-driven mindset. At AIstats, there's no micromanagement - only trust, autonomy, and freedom to experiment. We work top-down, focus on big goals, and seek people ready to make a real impact.

     

    Job Overview

    We are currently looking for a junior Data Scientist to join our growing team.

     

    Key Responsibilities

    • Generate new ideas that will subsequently be implemented.
    • Conduct research on both large and small volumes of data.
    • Analyze data.
    • Search for non-obvious insights and run experiments that build a better understanding of the data and business processes.
    • Research and develop (but not deploy) mathematical and machine learning models for forecasting and analytics.
    • Collaborate with analysts, developers, and sports experts to create models that enhance prediction accuracy.
    • Test, validate, and improve ML models based on performance results.

       

    Requirements

    Key Skills:

    • Proficiency in Python.
    • Experience with SQL.
    • Experience with tabular ML models.
    • Knowledge of probability theory and mathematical statistics.
    • Experience with deep learning.
    • Interest in football.
    • Love of writing tests.

       

    Main stack:

    • NumPy, SciPy, pandas, LightGBM, CatBoost, XGBoost, FastAPI, ClearML, PyTorch, etc.

       

      If you don't have all of these skills, don't worry. If you're a skilled researcher in Jupyter Notebook, we'd love to have you. If you're not particularly creative but can reliably write useful routines that, for example, automatically correct errors in raw data, we'd also love to have you.

       

    What we offer

    • Fully remote work format;
    • Flexible start of the working day and convenient schedule;
    • Stable competitive salary pegged to the USD;
    • 20 days of paid vacation and 17 additional paid days off (including sick leave, corporate holidays, and national/religious holidays);
    • Friendly communication culture, great product, and transparent processes;
    • Dynamic work environment with a passionate team that loves sports and technology;
    • Opportunities for professional growth and career development;
    • No micromanagement - just trust, autonomy, and freedom to experiment.

       

    Join our team and help drive innovation in the football industry!

  • 82 views · 16 applications · 6d

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 2 years of experience

    About Company

    AIstats is an innovative ecosystem of football products that combines cutting-edge big data analysis and computer vision technology.

    We are a fast-growing team of 40+ people based in Poland. Recently, we secured investments from Google executives and leading mobile company founders, which helped us increase our company's value fourfold in the past year.

    Our goal is to deepen fans' understanding of soccer and create new opportunities for professionals to analyze matches.

     

    Our Products

    • AIstats Mobile App – a consumer platform for football fans and bettors with over 100K monthly active users.
    • AIstats Business Solutions – pro-level analytics tools powered by machine learning and computer vision, built for clubs, scouts, and agents.

       

    Team Culture and Values

    AIstats is the place for those who want to transform the industry! We are building a long-term business to lead football analytics. Our goal is not just to create a product but to build an entire ecosystem. We value ambition, responsibility, and a results-driven mindset. At AIstats, there's no micromanagement - only trust, autonomy, and freedom to experiment. We work top-down, focus on big goals, and seek people ready to make a real impact.

     

    Job Overview

    We are currently looking for a talented Data Scientist to join our growing team.

     

    Key Responsibilities

    • Generate new ideas that will subsequently be implemented.
    • Conduct research on both large and small volumes of data.
    • Analyze data.
    • Search for non-obvious insights and run experiments that build a better understanding of the data and business processes.
    • Research and develop (but not deploy) mathematical and machine learning models for forecasting and analytics.
    • Collaborate with analysts, developers, and sports experts to create models that enhance prediction accuracy.
    • Test, validate, and improve ML models based on performance results.

       

    Requirements

    • 2+ years of experience as a Data Scientist.
    • Proficiency in Python.
    • Experience with SQL.
    • Experience with tabular ML models.
    • Knowledge of probability theory and mathematical statistics.
    • Experience with deep learning.
    • Interest in soccer.
    • Love of writing tests.

       

    Main stack:

    • NumPy, SciPy, pandas, LightGBM, CatBoost, XGBoost, FastAPI, ClearML, PyTorch, etc.

       

    What we offer

    • Fully remote work format;
    • Flexible start of the working day and convenient schedule;
    • Stable competitive salary pegged to the USD;
    • 20 days of paid vacation and 17 additional paid days off (including sick leave, corporate holidays, and national/religious holidays);
    • Friendly communication culture, great product, and transparent processes;
    • Dynamic work environment with a passionate team that loves sports and technology;
    • Opportunities for professional growth and career development;
    • No micromanagement - just trust, autonomy, and freedom to experiment.

       

    Join our team and help drive innovation in the football industry!

  • 14 views · 0 applications · 14d

    Data Scientist

    Hybrid Remote · Poland · Product · 5 years of experience · B2 - Upper Intermediate

    At TechBiz Global, we provide recruitment services to our top clients from our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.


    What you'll do:

    • End-to-End Model Ownership: Design, develop, and ship production-grade machine learning models. You will have the autonomy to own your solutions, considering everything from memory footprint to latency.
    • Research and Innovation: Dive deep into statistical analysis, probability, and machine learning/deep learning models. You'll be expected to critically evaluate existing solutions and, when necessary, develop novel approaches. A foundational understanding of NLP architectures like Transformers is a plus.
    • Engineering: Build and maintain containerized models and develop comprehensive model monitoring solutions to track metrics like variable drift.
    • Streamlined Deployment: Create and manage CI/CD pipelines for our machine learning models and contribute to the design and development of APIs.
    • Data Expertise: Utilize your SQL proficiency to perform in-depth data analysis and feature engineering.

     

    Requirements:

    • Proficient Software Engineering Skills: A strong track record of designing, developing, and deploying ML models in a production environment. Experience with deploying models on Scoring Platform is highly desirable.
    • Expertise: High proficiency in Python.
    • Strong Quantitative Foundation: A deep understanding of statistical analysis, probability, and a wide range of machine learning and deep learning models.
    • MLOps Experience: Hands-on experience with containerization (Docker), CI/CD pipelines, and model monitoring.
    • API & Containers: Experience in API schema design and development, and in building containerized applications (e.g., using Docker).
    • Excellent communication and stakeholder management skills.
    • Experience working with consumer or partner Fraud Detection/Credit Underwriting would be a plus.

      Ideally, you have:
    • An academic background in Physics, ML, Data Science, Mathematics, or Biology
    • Prior experience as a Senior Data Scientist, ideally within a regulated environment or fintech context

     

  • 13 views · 0 applications · 13d

    Senior Data Scientist / ML Engineer (Semantic Search)

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Project Description:

    The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
    Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.
    Current overriding priorities are new brands onboarding, re-architecture, database migrations, migration of microservices to a unified cloud-native solution without any disruption to business.

     

    Responsibilities:

    We are looking for an experienced Data Engineer with Machine Learning expertise and a good understanding of search engines to work on the following:
    - Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.
    - Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.
    - Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.
    - Integrate ML models and search capabilities into production systems.
    - Evaluate, fine-tune, and monitor search performance metrics.
    - Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.
    - Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.

     

    Mandatory Skills Description:

    - 5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.
    - Strong programming experience in both Java and Python (production-level code, not just prototyping).
    - Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).
    - Experience with Vector Databases, Embeddings, and Semantic Search techniques.
    - Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).
    - Experience deploying and maintaining ML/search systems in production.
    - Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).

     

    Nice-to-Have Skills Description:

    - Experience working in distributed teams with US customers.
    - Experience with LLMs, RAG pipelines, and vector retrieval frameworks.
    - Knowledge of Spring Boot, FastAPI, or similar backend frameworks.
    - Familiarity with Kubernetes, Docker, and cloud platforms (AWS/Azure/GCP).
    - Experience with MLOps and model monitoring tools.
    - Contributions to open-source search or ML projects.

  • 17 views · 1 application · 13d

    Senior Data Scientist / ML Engineer

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
    Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.
    Current overriding priorities are new brands onboarding, re-architecture, database migrations, migration of microservices to a unified cloud-native solution without any disruption to business.

    • Responsibilities:

      We are looking for an experienced Data Engineer with Machine Learning expertise and a good understanding of search engines to work on the following:
      - Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.
      - Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.
      - Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.
      - Integrate ML models and search capabilities into production systems.
      - Evaluate, fine-tune, and monitor search performance metrics.
      - Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.
      - Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.

    • Mandatory Skills Description:

      - 5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.
      - Strong programming experience in both Java and Python (production-level code, not just prototyping).
      - Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).
      - Experience with Vector Databases, Embeddings, and Semantic Search techniques.
      - Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).
      - Experience deploying and maintaining ML/search systems in production.
      - Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).

  • 14 views · 2 applications · 12d

    Data Engineer

    Full Remote · EU · 4 years of experience · B2 - Upper Intermediate

    We are seeking a detail-oriented and technically proficient Data Engineer to join our team. This role focuses on structuring, modeling, and analyzing client data, primarily for use in graph-based environments such as Neo4j.

    The ideal candidate will have strong data modeling skills, proficiency in Python and database technologies, and the ability to work closely with clients to deliver actionable insights.

    Key Responsibilities

    Data Structuring & Contribution
    – Analyze, understand data structures, and interpret data files (e.g., CSV, PDF, Excel) received from clients.
    – Organize and structure raw data into clean, usable formats for downstream modeling and analysis based on domain-driven design (DDD) or data mesh principles.
    – Learn and adopt the internal data platform's workflows and tooling.
    – Contribute to platform improvements and maintain alignment with evolving technical standards.
    – Work on data governance, metadata management, and documentation of models.
    – Work closely with the team lead and development team, primarily on customer-facing, consulting-style projects and platform improvement.

    Data Modeling
    – Design and prepare data schemas and models, primarily for use with the Neo4j graph database.
    – Map client data to the defined Neo4j schema using Pydantic models.
    – Define and manage schema evolution, normalization/denormalization, data integrity constraints, and indexing strategies.
    – Strong general data modeling skills are essential (Neo4j/graph modeling experience is a plus but not required).

    Data Analysis & Visualization
    – Perform exploratory data analysis using Neo4j Bloom or similar BI tools.
    – Present insights and visualizations to internal teams and clients to support understanding and data-driven decision-making.

    Client Interaction & Communication
    – Join and participate in customer-facing meetings to gather data requirements and present findings.
    – Communicate clearly with clients to discuss datasets, schemas, and analysis results.
    – Participate in standups and sync meetings, typically held between 8 am and 1 pm PST.

    Required Experience:

    • Experience with knowledge graphs in both production and non-production projects.
    • Hands-on experience with Neo4j is preferable. Alternatively, experience with Amazon Neptune, OrientDB, JanusGraph, Azure Cosmos DB, or TigerGraph is also valuable.
    • Candidates who have experimented with Neo4j and are willing to ramp up in graph technologies quickly are welcome.
    • Proficient in Python, including libraries such as pandas, polars, pydantic, numpy, typing, and Jupyter Notebook.
    • Proficient in SQL and Cypher queries.
    • Strong background in conceptual, logical, and physical data modeling.
    • Experience designing and implementing relational and non-relational (NoSQL/graph) schemas.
    • Familiarity with modern AI tools, including LLMs, embeddings, agentic systems, and tooling for unstructured data processing and intelligent automation.
  • 46 views · 0 applications · 12d

    Data Engineer/Scientist

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    🔥 We're looking for a highly skilled Data Expert! 🔥

     

    Product | Remote

     

    About the role 

    We're looking for a data engineer/scientist who bridges technical depth with curiosity. You'll help Redocly turn data into insight, driving smarter product, growth, and business decisions.

     

    This role combines data governance, analytics, and development. You’ll build reliable data pipelines, improve observability, and uncover meaningful patterns that guide how we grow and evolve.

     

    You'll work closely with product and technical teams to analyze user behavior, run experiments, build predictive models, and turn complex findings into actionable recommendations. You'll also design and support systems for collecting, transforming, and analyzing data across our stack.

     

    What you'll do

    • Analyze product and user behavior to uncover trends, bottlenecks, and opportunities.
    • Design and evaluate experiments (A/B tests) to guide product and growth decisions.
    • Build and maintain data pipelines, ETL processes, and dashboards for analytics and reporting.
    • Develop and validate statistical and machine learning models for prediction, segmentation, and forecasting.
    • Design and optimize data models for new features and analytics (e.g., using dbt).
    • Work with event-driven architectures and standards like AsyncAPI and CloudEvents.
    • Collaborate with engineers to improve data quality, consistency, and governance across systems.
    • Use observability and tracing tools (e.g., OpenTelemetry) to monitor and improve performance.
    • Create visualizations and reports that clearly communicate results to technical and non-technical audiences.
    • Support existing frontend and backend systems related to analytics and data processing.
    • Champion experimentation, measurement, and data-driven decision-making across teams.

     

    You're a great fit if you have 

    • 5+ years of software engineering experience, with 3+ years focused on data science or analytics.
    • Strong SQL skills and experience with data modeling (dbt preferred).
    • Solid understanding of statistics, hypothesis testing, and experimental design.
    • Proven experience in data governance, analytics, and backend systems.
    • Familiarity with columnar databases or analytics engines (ClickHouse, Postgres, etc.).
    • Experience with modern data visualization tools.
    • Strong analytical mindset, attention to detail, and clear communication.
    • A passion for clarity, simplicity, and quality in both data and code.
    • English proficiency: Upper-Intermediate or higher.

     

    Nice to have

    • Understanding of product analytics and behavioral data.
    • Experience with causal inference or time-series modeling.
    • Strong proficiency with Node.js, React, JavaScript, and TypeScript.
    • Experience with frontend or backend performance optimization.
    • Familiarity with Git-based workflows and CI/CD for data pipelines.
       

    How you'll know you're doing a great job

    • Teams make better product decisions, faster, because of your insights.
    • Data pipelines are trusted, observable, and performant.
    • Experiments drive measurable product and business outcomes.
    • Metrics and dashboards are used across teams, not just built once.
    • You're the go-to person for clarity when questions arise about “what the data says.”

     

    About Redocly

    Redocly builds tools that accelerate API ubiquity. Our platform helps teams create world-class developer experiences, from API documentation and catalogs to internal developer hubs and public showcases. We're a globally distributed team that values clarity, autonomy, and craftsmanship. You'll work alongside people who love developer experience, storytelling, and building tools that make technical work simpler and more joyful.

    Headquartered in Austin, Texas, US. There is also an office in Lviv, Ukraine.

     

    Redocly is trusted by leading tech, fintech, telecom, and enterprise teams to power API documentation and developer portals. Redocly's clients range from startups to Fortune 500 enterprises.

    https://redocly.com/

     

    Working with Redocly

    • Team: 4-6 people (middle to senior level)
    • Team's location: Ukraine & Europe
    • There are functional, product, and platform teams; each has its own ownership and line structure, and the teams themselves decide when to hold weekly meetings.
    • Cross-functional teams are formed for each two-month cycle, giving team members the opportunity to work across all parts of the product.
    • Methodology: Shape Up

     

    Perks

    • Competitive salary based on your expertise (approximately $6,000 - $6,500 per month)
    • Full remote, though you're welcome to come to the office occasionally if you wish.
    • Cooperation on a B2B basis with a US-based company (for EU citizens) or under a gig contract (for Ukraine).
    • After a year of working with the company, you can buy a certain number of the company's shares.
    • Around 30 days of vacation (unlimited, but let's keep it reasonable)
    • 10 working days of sick leave per year
    • Public holidays according to the standards
    • No trackers and screen recorders
    • Working hours: EU/UA time zone. Working day: 8 hours. Most people start work at 10-11 am.
    • Equipment provided: MacBooks (M1–M4)
    • Regular performance reviews

     

    Hiring Stages

    • Prescreening (30-45 min)
    • HR Call (45 min)
    • Initial Interview (30 min)
    • Trial Day (paid)
    • Offer

     

    If you are an experienced Data Scientist and want to work on impactful data-driven projects, we'd love to hear from you!


    Apply now to join our team!

     

  • 25 views · 2 applications · 11d

    Senior Data Scientist

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    MINT is the world leader in Advertising Resource Management (ARM) software and the digital transformation partner that enables the agentic marketing model of the future.  
     

    With a global presence in Europe and the Americas, MINT boasts a diverse team of over 300 professionals that drive innovation and operational excellence worldwide, enabling organizations to achieve long-term advertising success and sustained financial growth.  

     

    Thanks to our open enterprise IT solution, we empower global brands and agencies to cut costs, improve campaign outcomes, and maximize profitability through our three-layered platform based on digitalized workflows, advanced data and taxonomies, and a powerful agentic layer. 

     

    The Role 

    We're looking for a Senior Data Scientist to take ownership of a cutting-edge agentic AI suite for full-cycle media operations, from media planning and activation to financial reconciliation.

    Your job will focus on designing, developing and deploying end-to-end data initiatives - from data ingestion to data presentation, from proof of concept to production. In this role, you'll have the opportunity to shape product direction, lead technical development, and mentor others. 

    You'll work remotely with talented peers across 10+ countries and three continents. We value autonomy, proactivity, a solution-focused mindset, and collaboration in a flexible and mostly asynchronous environment.

    This is a consultancy engagement, not an employment contract. The selected consultant will work independently, bringing their specialized knowledge and experience to our projects. 

     

    Responsibilities  

    • Lead the design, development, and deployment of data science solutions, from exploration to production 
    • Work closely with product, engineering, and solution consultants' teams to translate business challenges into scalable data products
    • Develop, evaluate, and deploy machine learning models and data pipelines for real-world applications 
    • Explore and integrate new technologies such as LLMs, MCP/A2 and AI agents to enhance our products 
    • Explore building a standalone agentic workforce for media operations  
    • Ensure data quality and integrity across multiple sources and processing stages 
    • Communicate findings and insights clearly to technical and non-technical stakeholders 
    • Assist in identifying and implementing SOTA and emerging AI/ML techniques 
    • Mentor and guide junior data scientists, fostering best practices and continuous learning 
    • Contribute to team-wide technical decisions, planning, and long-term strategy 

     

    This role will be a great fit if you have: 

    • 5+ years of experience in Data Science, with a solid foundation in statistics, machine learning, and GenAI
    • Advanced proficiency in Python (and its core data/ML libraries) and SQL
    • Experience designing and developing AI agents from scratch with frameworks such as LangChain and/or LangGraph, using RAG and similar techniques
    • Proven track record of delivering production-grade data, machine learning, and GenAI solutions
    • Hands-on experience with ETL processes, large datasets (1M+ rows), and data cleaning and validation
    • Familiarity with AWS or another major cloud platform
    • Experience working with time-series forecasting, recommender systems, NLP, or optimization problems
    • Strong communication and presentation skills, with the ability to bridge technical and non-technical perspectives
    • Self-driven and comfortable leading projects independently in a remote setup

     

    We would also appreciate: 

    • Familiarity with Docker or Kubernetes, CI/CD pipelines, monitoring tools, and Git 
    • Previous experience with setting up MCP/A2 integrations is desired but not required 
    • Previous experience mentoring or leading other data scientists or engineers 
    • Background or interest in the advertising or marketing technology space 

     

     

    Interview Process 

    Candidates can expect the following steps in our interview process: 

    • HR Video Interview (30 minutes): A virtual interview with HR to provide an overview of the company and the role, as well as to gain insight into your experience and background.   
    • Technical Interview with Head of Engineering (1 hour): A virtual interview with the Head of Engineering focused on your technical expertise, past experience, and problem-solving approach.  
    • Interview with the CPTO (45 minutes): Ideally conducted in person, this session will serve as a high-level discussion to align on technical vision and product approach. 
    • Offer: after completing the interview process, successful candidates will receive an offer. 

     

    MINT is committed to a diverse and inclusive workplace. MINT AI is an equal opportunity employer and does not discriminate on the basis of age, race, religion, color, national origin, gender, gender identity, gender expression, sexual orientation, immigration status, medical condition, protected veteran status, disability, genetic information, political views or activity, or other legally protected status.  

  • 11 views · 0 applications · 11d

    Data Scientist with Java expertise

    Full Remote · Ukraine · 6 years of experience · B2 - Upper Intermediate

    Project description

    The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
    Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.
    Current overriding priorities are new brands onboarding, re-architecture, database migrations, migration of microservices to a unified cloud-native solution without any disruption to business.

    Responsibilities

    We are looking for an experienced Data Engineer with Machine Learning expertise and a good understanding of search engines to work on the following:

    Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.

    Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.

    Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.

    Integrate ML models and search capabilities into production systems.

    Evaluate, fine-tune, and monitor search performance metrics.

    Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.

    Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.

    Skills

    Must have

    5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.

    Strong programming experience in both Java and Python (production-level code, not just prototyping).

    Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).

    Experience with Vector Databases, Embeddings, and Semantic Search techniques.

    Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).

    Experience deploying and maintaining ML/search systems in production.

    Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).

    Nice to have

    Experience working in distributed teams with US customers.

    Experience with LLMs, RAG pipelines, and vector retrieval frameworks.

    Knowledge of Spring Boot, FastAPI, or similar backend frameworks.

    Familiarity with Kubernetes, Docker, and cloud platforms (AWS/Azure/GCP).

    Experience with MLOps and model monitoring tools.

    Contributions to open-source search or ML projects.

    Languages

    English: B2 Upper Intermediate

  • 13 views · 0 applications · 11d

    Senior Data Scientist / ML Engineer (Semantic Search)

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
    • Project Description:

      The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
      Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.
      Current overriding priorities are new brands onboarding, re-architecture, database migrations, migration of microservices to a unified cloud-native solution without any disruption to business.

     

     

    • Responsibilities:

      We are looking for an experienced Data Engineer with Machine Learning expertise and a good understanding of search engines to work on the following:
      - Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.
      - Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.
      - Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.
      - Integrate ML models and search capabilities into production systems.
      - Evaluate, fine-tune, and monitor search performance metrics.
      - Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.
      - Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.

     

     

    • Mandatory Skills Description:

      - 5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.
      - Strong programming experience in both Java and Python (production-level code, not just prototyping).
      - Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).
      - Experience with Vector Databases, Embeddings, and Semantic Search techniques.
      - Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).
      - Experience deploying and maintaining ML/search systems in production.
      - Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).

     

     

    • Nice-to-Have Skills Description:

      - Experience working in distributed teams with US customers.
      - Experience with LLMs, RAG pipelines, and vector retrieval frameworks.
      - Knowledge of Spring Boot, FastAPI, or similar backend frameworks.
      - Familiarity with Kubernetes, Docker, and cloud platforms (AWS/Azure/GCP).
      - Experience with MLOps and model monitoring tools.
      - Contributions to open-source search or ML projects.

     

     

    • Languages:
      • English: B2 Upper Intermediate
  • 14 views · 0 applications · 11d

    Senior Data Scientist / ML Engineer (Semantic Search)

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
    • Project Description:

      The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
      Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.
      Current overriding priorities are new brands onboarding, re-architecture, database migrations, migration of microservices to a unified cloud-native solution without any disruption to business.

    • Responsibilities:

      We are looking for an experienced Data Engineer with Machine Learning expertise and a good understanding of search engines to work on the following:
      - Design, develop, and optimize semantic and vector-based search solutions leveraging Lucene/Solr and modern embeddings.
      - Apply machine learning, deep learning, and natural language processing techniques to improve search relevance and ranking.
      - Develop scalable data pipelines and APIs for indexing, retrieval, and model inference.
      - Integrate ML models and search capabilities into production systems.
      - Evaluate, fine-tune, and monitor search performance metrics.
      - Collaborate with software engineers, data engineers, and product teams to translate business needs into technical implementations.
      - Stay current with advancements in search technologies, LLMs, and semantic retrieval frameworks.

    • Mandatory Skills Description:

      - 5+ years of experience in Data Science or Machine Learning Engineering, with a focus on Information Retrieval or Semantic Search.
      - Strong programming experience in both Java and Python (production-level code, not just prototyping).
      - Deep knowledge of Lucene, Apache Solr, or Elasticsearch (indexing, query tuning, analyzers, scoring models).
      - Experience with Vector Databases, Embeddings, and Semantic Search techniques.
      - Strong understanding of NLP techniques (tokenization, embeddings, transformers, etc.).
      - Experience deploying and maintaining ML/search systems in production.
      - Solid understanding of software engineering best practices (CI/CD, testing, version control, code review).

    • Nice-to-Have Skills Description:

      - Experience working in distributed teams with US customers.
      - Experience with LLMs, RAG pipelines, and vector retrieval frameworks.
      - Knowledge of Spring Boot, FastAPI, or similar backend frameworks.
      - Familiarity with Kubernetes, Docker, and cloud platforms (AWS/Azure/GCP).
      - Experience with MLOps and model monitoring tools.
      - Contributions to open-source search or ML projects.

    • Languages:
      • English: B2 Upper Intermediate
  • 17 views · 3 applications · 11d

    Senior Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate

    For our client, we are looking for a Senior Machine Learning Engineer to join a long-term, full-time, remote collaboration.

    You will be joining the team that designs, builds, and scales machine learning systems powering personalisation and customer engagement at a large scale. This is an opportunity to work on production-ready ML pipelines, real-time decision-making systems, and experimentation frameworks that directly impact millions of users.


    Your responsibilities will include:

    • Developing pipelines that transform behavioral, demographic, and contextual data into real-time features.
    • Designing APIs and services for low-latency prediction and decision-making.
    • Implementing frameworks for A/B testing, exploration/exploitation strategies, and model evaluation.
    • Working closely with product and engineering teams to balance engagement, business value, and compliance.
    • Establishing monitoring, logging, and retraining workflows to continuously validate and improve models.

     

    What we expect from you:

    • 5+ years of applied ML engineering experience (recommendation systems, personalization, ranking, or ads).
    • Strong background in Python and/or Go, SQL, and ML frameworks such as TensorFlow or PyTorch.
    • Experience deploying real-time ML systems (low-latency serving, feature stores, event-driven architectures).
    • Familiarity with cloud ML platforms (Vertex AI, SageMaker, or similar).
    • Experience with data warehouses (BigQuery, Snowflake, Redshift).
    • Understanding of multi-objective optimisation and trade-offs in personalisation.
    • Ability to thrive in a fast-paced, startup-style environment

    Will be a plus:

    • Experience in martech, adtech, CRM, or large-scale consumer personalisation.
    • Exposure to bandit algorithms or reinforcement learning.
    • Prior work on systems serving millions of users at scale.
    • Experience with Google Cloud Platform (GCP).

    Soft Skills:

    • Fluent English and strong communication skills.
    • Proactive and positive attitude.
    • Ability to work 9 am–4 pm EST.

     

    We offer:

    • Opportunities to shape large-scale personalisation technology.
    • Salary range: 6,000–7,000 USD
    • Competitive compensation package that matches your skills and experience.
    • Professional growth, conferences, and skill development budget.
    • Flexible remote work with support for your productivity.
    • Collaborative and innovative environment where impact is valued over years of experience.
    • Online & offline activities

     

  • 46 views · 4 applications · 10d

    Data Annotator to $600

    Full Remote · Ukraine · 1 year of experience · A2 - Elementary

    About us:

    We are Data Science UA, a fast-growing IT service company. We have been proudly building the Data Science community in Ukraine for more than 6 years. Data Science UA unites researchers, engineers, and developers around Data Science and related areas. We conduct events on machine learning, computer vision, intelligence, information science, and the use of artificial intelligence for business in various fields.

     

    About product:

    This is an intelligent video analytics platform designed to automate the process of learning and generating contextual insights from unstructured data inside video footage. The company's platform uses artificial intelligence built on proprietary machine comprehension services that can contextually understand the content of users' data, enabling clients to deliver contextual advertisements and product recommendations that create a personalized experience for users.

     

    They have developed and deployed a computer vision system for industrial procedure monitoring, which involves multiple cameras, on-prem servers with multiple GPUs, and a cloud component on AWS. The system is installed in 3 countries, and we have new customers all over the world.

     

    About the role:

    We are looking for an experienced Junior (or Junior/Middle) Data Annotator who will work closely with engineers on creating datasets for training neural networks.

     

    Requirements:

    - Minimum of a Bachelor's Degree in Computer Science or a related field.

    - Ability to remain attentive during long, monotonous work.

    - Ability to perform tasks according to requirements.

    - English: Intermediate and above.

    - High level of communication skills.

    - Prior annotation experience.

    - Familiarity with data annotation tools, methods, and processes.

    - Familiarity with Confluence, Jira, and Google Services (Docs, Sheets, Slides, Drive).

     

    Responsibilities:

    - Image annotation (object detection and segmentation annotation) with high accuracy, according to the rules of the task, using a special annotation tool.

    - Sorting images by specified conditions.

    - Achieving the required performance and quality KPIs.

     

    We offer:

    - The possibility to make an impact on the product from scratch;

    - Competitive salary and perks;

    - Working with cutting-edge technologies;

    - A friendly team and nice environment;

    - A positive atmosphere throughout the company.

     

    You will work closely with the founding team and greatly impact the direction of the products and the company. If you’re interested in joining our new R&D team, please let us know about yourself.
