Jobs Data Science (89)

  • · 43 views · 2 applications · 11d

    Senior Data Scientist

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · English - B1 Ukrainian Product 🇺🇦

    Hello!

     

    We are E-Com, a team of Foodtech and Ukrainian product lovers.

    And we also break stereotypes that retail is only about tomatoes. Believe me, the technical part of our projects provides a whole field for creativity and brainstorming.

     

    What we are currently working on:

    • we are upgrading the existing delivery of a wide range of products from Silpo stores;
    • we are developing super-fast delivery of products and dishes under the new LOKO brand.

     

    We are developing a next-generation Decision Support Platform that connects demand planning, operational orchestration, and in-store execution optimization into one unified Analytics and Machine Learning Ecosystem.

     

    The project focuses on three major streams: 

    • Demand & Forecasting Intelligence: building short-term demand forecasting models, generating granular demand signals for operational planning, identifying anomalies, and supporting commercial decision logic across virtual warehouse clusters.

    • Operational Orchestration & Task Optimization: designing predictive models for workload estimation, task duration (ETA), and prioritization. Developing algorithms that automatically map operational needs into structured tasks and optimize their sequencing and allocation across teams.

    • In-Store Execution & Routing Optimization: developing models that optimize picker movement, predict in-store congestion, and recommend optimal routes and execution flows. Integrating store layout geometry, product characteristics, and operational constraints to enhance dark-store efficiency.

     

    You will join a cross-functional team to design and implement data-driven decision modules that directly influence commercial and operational decisions.

     

    Responsibilities:

    • develop and maintain ML models for forecasting short-term demand signals and detecting anomalies across virtual warehouse clusters;

    • build predictive models to estimate task workload, execution times (ETA), and expected operational performance;

    • design algorithms to optimize task distribution, sequencing, and prioritization across operational teams;

    • develop routing and path-optimization models to improve picker movement efficiency within dark stores;

    • construct data-driven decision modules that integrate commercial rules, operational constraints, and geometric layouts;

    • translate business requirements into ML-supported decision flows and automate key parts of operational logic;

    • build SQL pipelines and data transformations for commercial, operations, and logistics datasets;

    • work closely with supply chain, dark store operations, category management, and IT to deliver measurable improvements;

    • conduct A/B testing, validate model impact, and ensure high-quality model monitoring.
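
    As a rough, hypothetical illustration of the first responsibility (invented column names, plain scikit-learn, not E-Com's actual stack), a short-term demand model with lag features and a residual-based anomaly flag could start like this:

    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    def make_features(df: pd.DataFrame) -> pd.DataFrame:
        # df: one row per (store, sku, date) with a "units" column -- hypothetical schema
        df = df.sort_values(["store", "sku", "date"])
        for lag in (1, 7, 14):
            df[f"lag_{lag}"] = df.groupby(["store", "sku"])["units"].shift(lag)
        df["dow"] = pd.to_datetime(df["date"]).dt.dayofweek
        return df.dropna()

    def fit_and_flag(df: pd.DataFrame):
        feats = [c for c in df.columns if c.startswith("lag_")] + ["dow"]
        model = GradientBoostingRegressor().fit(df[feats], df["units"])
        residuals = df["units"] - model.predict(df[feats])
        # crude anomaly signal: demand far outside the model's typical error band
        anomalies = residuals.abs() > 3 * residuals.std()
        return model, anomalies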

     

    Requirements:

    • bachelor's degree in Mathematics / Quantitative Economics / Econometrics / Statistics / Computer Science / Finance;

    • at least 2 years of working experience in Data Science;

    • strong mathematical background in linear algebra, probability, statistics, and optimization techniques;

    • proven experience with SQL (window functions, CTEs, joins) and Python;

    • expertise in Machine Learning, Time Series Analysis, and the application of statistical concepts (hypothesis testing, A/B tests, PCA);

    • ability to work independently and decompose complex problems.

     

    Preferred:

    • experience with Airflow, Docker, or Kubernetes for data orchestration;

    • practical experience with Amazon SageMaker: training, deploying, and monitoring ML models in a production environment;

    • knowledge of reporting and Business Intelligence software (Power BI, Tableau, Looker);

    • ability to design and deliver packaged analytical/ML solutions.

     

    What we offer

    • competitive salary;
    • opportunity to work on flagship projects impacting millions of users;
    • flexible remote or office-based work (with backup power and reliable connectivity at SilverBreeze Business Center);
    • flexible working schedule;
    • medical and life insurance packages;
    • support for GIG contract or private entrepreneurship arrangements;
    • discounts at Fozzy Group stores and restaurants;
    • psychological support services;
    • caring corporate culture;
    • a team where you can implement your ideas, experiment, and feel like you are among friends.
  • · 30 views · 11 applications · 12d

    Principal AI Platform Engineer (Data Science, Machine Learning)

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

    🚀 AIMPROSOFT — Principal AI Platform Engineer Opportunity!

     

    Aimprosoft is looking for a senior-level AI engineer who will design and lead the architecture of an enterprise-grade AI platform, evolving from an internal RAG-based assistant into a scalable, market-ready knowledge and automation product.

     

    🎯 About the role:

    This is a principal-level role with high autonomy and architectural ownership. You will define the technical direction, key design principles, and long-term platform strategy for AI-powered knowledge discovery, semantic search, and business automation across multiple domains.

    🔑 Core Responsibilities:

    • Define the end-to-end architecture of an AI knowledge and automation platform
    • Make independent decisions and lead the technical direction
    • Design scalable retrieval systems beyond naive RAG implementations (semantic, document-level, process-level)
    • Establish reasoning and orchestration patterns for multi-step business use cases
    • Balance product vision, technical feasibility, and long-term maintainability
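
    As a loose, self-contained illustration of retrieval "beyond naive RAG" (toy corpus, LSA standing in for learned embeddings, no vector database or reranker), a hybrid lexical-plus-semantic ranker can be sketched like this:

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.preprocessing import normalize

    docs = [
        "invoice approval workflow for enterprise procurement",
        "how to reset a user password in the admin console",
        "quarterly revenue report and financial summary",
    ]

    tfidf = TfidfVectorizer().fit(docs)
    X_lex = normalize(tfidf.transform(docs))
    svd = TruncatedSVD(n_components=2, random_state=0).fit(X_lex)
    X_sem = normalize(svd.transform(X_lex))

    def hybrid_search(query: str, alpha: float = 0.5):
        q_lex = normalize(tfidf.transform([query]))
        q_sem = normalize(svd.transform(q_lex))
        lexical = (X_lex @ q_lex.T).toarray().ravel()   # keyword overlap
        semantic = X_sem @ q_sem.ravel()                # latent-topic similarity
        scores = alpha * lexical + (1 - alpha) * semantic
        return sorted(zip(scores, docs), reverse=True)

    print(hybrid_search("approve an invoice"))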
       

    🔥 What We Need From You:

    • Proven experience designing production AI or search platforms from scratch
    • Experience moving from single-use assistant to multi-domain, multi-use platforms
    • API-first and platform-oriented system design
    • Strong understanding of trade-offs in scale, latency, relevance, and cost
    • Ability to design systems that remain explainable, debuggable, and governed
    • Deep expertise in RAG, semantic search, and knowledge retrieval systems
    • Vector databases, hybrid search, semantic indexing, and relevance ranking
    • Architecture for document-, entity-, and process-level knowledge modeling
    • Advanced LLM integration and reasoning design (structured, controlled, role-based)
    • Proven experience with agent orchestration as a controlled architectural pattern, including memory, tools, learning, task decomposition, etc.
    • Orchestration of AI workflows for automation use cases
    • Experience with AI observability tools
       

    📌 This Role Is Designed For Engineers Who:

    • Enjoy thinking in platforms and long-term systems, not one-off features
    • Are comfortable defining direction in ambiguous problem spaces where assumptions must be made explicit
    • Care deeply about architectural clarity, system boundaries, and ownership
    • Naturally reason about failure modes, trade-offs, and system limits
    • Balance ambitious technical vision with practical constraints
    • See governance, observability, and predictability as core parts of good engineering — not afterthoughts
       

    💼 What We Offer:

    • A competitive salary that appreciates your skills and experience
    • Cozy atmosphere and modern approaches. We have neither bureaucracy nor strict management nor "working under pressure" conditions
    • Opportunity to implement your ideas, tools, and approaches. We are open to changes and suggestions aimed at improvement
  • · 74 views · 2 applications · 13d

    Computer Vision/Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 1 year of experience · English - B2

    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the role:
    We are looking for a Computer Vision / Machine Learning Engineer to develop offline CV models for industrial visual inspection.


    Your main task will be to design, train, and evaluate models on inspection data in order to:

     

    • Improve discrimination between good vs. defect samples
    • Provide insights into key defect categories (e.g., terminal electrode irregularities, surface chipping)
    • Significantly reduce false-positive rates, optimizing for either precision or recall
    • Prepare the solution for future deployment, scaling, and maintenance

    Key Responsibilities:
    Data Analysis & Preparation
    - Conduct dataset audits, including class balance checks and sample quality reviews
    - Identify low-frequency defect classes and outliers
    - Design and implement augmentation strategies for rare defects and edge cases
    Model Development & Evaluation
    - Train deep-learning models on inspection images for defect detection
    - Use modern computer vision / deep learning frameworks (e.g., PyTorch, TensorFlow)
    - Evaluate models using confusion matrices, ROC curves, precision–recall curves, F1 scores, and other relevant metrics
    - Analyze false positives/false negatives and propose thresholds or model improvements
    Reporting & Communication
    - Prepare clear offline performance reports and model evaluation summaries
    - Explain classifier decisions, limitations, and reliability in simple, non-technical language when needed
    - Provide recommendations for scalable deployment in later phases (e.g., edge / on-prem inference, integration patterns)
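
    A minimal sketch of the offline evaluation loop described above, on made-up labels and scores (1 = defect), using scikit-learn; the threshold sweep at the end is where the false-positive vs. missed-defect trade-off gets decided:

    import numpy as np
    from sklearn.metrics import (confusion_matrix, f1_score,
                                 precision_recall_curve, roc_auc_score)

    y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0, 0, 1])                        # 1 = defect
    y_score = np.array([0.1, 0.3, 0.2, 0.8, 0.4, 0.05, 0.9, 0.6, 0.15, 0.7])

    threshold = 0.5
    y_pred = (y_score >= threshold).astype(int)

    print(confusion_matrix(y_true, y_pred))            # rows: true class, columns: predicted
    print("ROC-AUC:", roc_auc_score(y_true, y_score))
    print("F1 at 0.5:", f1_score(y_true, y_pred))

    # Sweep the precision-recall curve to choose an operating threshold.
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    for p, r, t in zip(precision, recall, np.append(thresholds, 1.0)):
        print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")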

    Candidate Requirements:
    Must-have:
    - 1-2 years of hands-on experience with computer vision and deep learning (classification, detection, or segmentation)
    - Strong proficiency in Python and at least one major DL framework (PyTorch or TensorFlow/Keras)
    - Solid understanding of:

    • Image preprocessing and augmentation techniques
    • Classification metrics: accuracy, precision, recall, F1, confusion matrix, ROC, PR curves
    • Handling imbalanced datasets and low-frequency classes

    - Experience training and evaluating offline models on real production or near-production datasets
    - Ability to structure and document experiments, compare baselines, and justify design decisions
    - Strong analytical and problem-solving skills; attention to detail in data quality and labelling
    - Good communication skills in English (written and spoken) to interact with internal and client stakeholders

    Nice-to-have:
    - Experience with industrial / manufacturing computer vision (AOI, quality inspection, defect detection, etc.)
    - Familiarity with ML Ops/deployment concepts (ONNX, TensorRT, Docker, REST APIs, edge devices)
    - Experience working with time-critical or high-throughput inspection systems
    - Background in electronics, semiconductors, or similar domains is an advantage
    - Experience preparing client-facing reports and presenting technical results to non-ML audiences

    We offer:
    - Free English classes with a native speaker and external courses compensation;
    - PE support by professional accountants;
    - 40 days of PTO;
    - Medical insurance;
    - Team-building events, conferences, meetups, and other activities;
    - There are many other benefits you'll find out at the interview.

  • · 30 views · 4 applications · 16d

    Game Mathematician

    Countries of Europe or Ukraine · Product · 3 years of experience · English - None

    Hello, future colleague!
     

    At DreamPlay, we create pixel-perfect slot games powered by our own engine. We are reinventing the gambling experience by delivering unique, high-quality games to the market.
    We are a team of professionals who value quality, ownership, transparency, and collaboration. We believe in a results-driven environment where everyone has the space to grow, contribute, and make an impact.

    We’re currently looking for a Game Mathematician to join our team and help shape the core mechanics behind our games.

     

    Requirements:

    • Experience in developing mathematics for casino slots.
    • Strong analytical and problem-solving skills with a high level of attention to detail
    • Solid background in Combinatorics, Probability Theory, and Statistics
    • Advanced proficiency in MS Excel, including building and adapting large, complex spreadsheets
    • Strong critical thinking skills and the ability to manage multiple tasks simultaneously
       

    Key Responsibilities:

    • Test and validate mathematical outcomes to ensure accuracy and quality (using MS Excel, programming, and proprietary tools).
    • Design and maintain high-quality mathematical documentation, including math models, game logic, PAR sheets, and customer-facing materials.
    • Analyze and balance game mechanics to ensure fairness, performance, and regulatory compliance.
    • Run simulations and optimize mathematical algorithms to improve game performance and player engagement.
    • Maintain clear technical documentation to support collaboration across teams and meet compliance requirements.
    • Stay up to date with industry trends, emerging technologies, and competitor practices to continuously improve game design strategies.
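
    As a toy illustration only (invented reel strip and paytable, not DreamPlay's engine or any real PAR sheet), a Monte Carlo check of RTP and hit frequency for a simplified three-reel game looks like this:

    import random

    REEL = ["A"] * 5 + ["K"] * 4 + ["Q"] * 3 + ["7"] * 1      # one simplified reel strip
    PAYTABLE = {"A": 5, "K": 10, "Q": 25, "7": 200}            # payout for three of a kind
    BET = 1

    def spin() -> int:
        symbols = [random.choice(REEL) for _ in range(3)]
        if symbols[0] == symbols[1] == symbols[2]:
            return PAYTABLE[symbols[0]]
        return 0

    def simulate(n_spins: int = 1_000_000) -> None:
        total_win = hits = 0
        for _ in range(n_spins):
            win = spin()
            total_win += win
            hits += win > 0
        print(f"RTP ~ {total_win / (n_spins * BET):.4f}")
        print(f"Hit frequency ~ {hits / n_spins:.4f}")

    if __name__ == "__main__":
        random.seed(42)
        simulate()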
       

    We Offer:

    • Opportunity to work remotely or from our Kyiv office.
    • Flexible working hours β€” you choose when to start your day.
    • Modern Mac equipment.
    • Career growth within a team of iGaming professionals.
    • Supportive, transparent team culture with minimal bureaucracy.
    • Time-off policy that fits real life (paid vacation, sick leave, public holidays).
    • Benefits for employees.
  • · 36 views · 3 applications · 17d

    Machine Learning Engineer (Real-Time Inference Systems)

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - C2

    Our client is a leading mobile marketing and audience platform empowering the global app ecosystem with advanced solutions in mobile marketing, audience building, and monetization.

    With direct integrations into 500,000+ mobile apps worldwide, they process massive volumes of first-party data to deliver intelligent, real-time, and scalable advertising decisions. Their platform operates at extreme scale, serving billions of requests per day under strict latency and performance constraints.

    About the Role

    We are looking for a highly skilled, independent, and driven Machine Learning Engineer to own and lead the design and development of our next-generation real-time inference services.

    This is a rare opportunity to take ownership of mission-critical systems on a massive scale, working at the intersection of machine learning, large-scale backend engineering, and business logic.

    You will build robust, low-latency services that seamlessly combine predictive models with dynamic decision logic — while meeting extreme requirements for performance, reliability, and scalability.

    Responsibilities

    • Own and lead the design and development of low-latency inference services handling billions of requests per day
    • Build and scale real-time decision-making engines, integrating ML models with business logic under strict SLAs
    • Collaborate closely with Data Science teams to deploy models reliably into production
    • Design and operate systems for model versioning, shadowing, and A/B testing in runtime
    • Ensure high availability, scalability, and observability of production services
    • Continuously optimize latency, throughput, and cost efficiency
    • Work independently while collaborating with stakeholders across Algo, Infra, Product, Engineering, Business Analytics, and Business teams
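
    A minimal sketch of what such a service can look like with FastAPI (one of the serving frameworks listed under Requirements); the model artifact, request schema, and decision logic here are hypothetical placeholders:

    from contextlib import asynccontextmanager

    import joblib
    from fastapi import FastAPI
    from pydantic import BaseModel

    class ScoreRequest(BaseModel):
        features: list[float]

    MODEL = {}

    @asynccontextmanager
    async def lifespan(app: FastAPI):
        # Load the serialized model once at startup, not per request, to keep tail latency low.
        MODEL["clf"] = joblib.load("model.joblib")   # hypothetical artifact path
        yield
        MODEL.clear()

    app = FastAPI(lifespan=lifespan)

    @app.post("/predict")
    async def predict(req: ScoreRequest) -> dict:
        proba = MODEL["clf"].predict_proba([req.features])[0][1]
        # Business rules (eligibility, pacing, floors) would be applied around the raw score here.
        return {"score": float(proba)}

    # Run with: uvicorn service:app --workers 4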

    Requirements

    • B.Sc. or M.Sc. in Computer Science, Software Engineering, or a related technical field
    • 5+ years of experience building high-performance backend or ML inference systems
    • Strong expertise in Python
    • Hands-on experience with low-latency APIs and real-time serving frameworks
      (FastAPI, Triton Inference Server, TorchServe, BentoML)
    • Experience designing scalable service architectures
    • Strong knowledge of async processing, message queues, and streaming systems
      (Kafka, Pub/Sub, SQS, RabbitMQ, Kinesis)
    • Solid understanding of model deployment, online/offline feature parity, and real-time monitoring
    • Experience with cloud platforms (AWS, GCP, or OCI)
    • Strong hands-on experience with Kubernetes
    • Experience with in-memory / NoSQL databases
      (Aerospike, Redis, Bigtable)
    • Familiarity with observability stacks: Prometheus, Grafana, OpenTelemetry
    • Strong sense of ownership and ability to drive solutions end-to-end
    • Passion for performance, clean architecture, and impactful systems
  • · 34 views · 4 applications · 17d

    Computer Vision Engineer (SLAM, VIO)

    Office Work · Ukraine (Kyiv) · Product · 1 year of experience · English - None MilTech 🪖

    We are looking for a Computer Vision Engineer with a background in classical computer vision techniques and hands-on implementation of low-level CV algorithms.

    The ideal candidate will have experience with SLAM, Visual-Inertial Odometry (VIO), and sensor fusion.

    Required Qualifications:

    • 1+ years of hands-on experience with classical computer vision
    • Knowledge of popular computer vision networks and components 
    • Understanding of geometrical computer vision principles
    • Hands-on experience in implementing low-level CV algorithms
    • Practical experience with SLAM and/or Visual-Inertial Odometry (VIO)
    • Proficiency in C++
    • Experience with Linux
    • Ability to quickly navigate through recent research and trends in computer vision.
    • Relevant work experience or education in STEM field

    Nice to Have:

    • Experience with Python
    • Familiarity with neural networks and common CV frameworks/libraries (OpenCV, NumPy, PyTorch, ONNX, Eigen, etc.)
    • Experience with sensor fusion.
  • · 17 views · 0 applications · 18d

    Senior Data Scientist

    Ukraine · Product · 5 years of experience · English - B2

    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    The company is a trailblazer in the world of data-driven advertising, known for its innovative approach to optimizing ad placements and campaign effectiveness through advanced analytics and machine learning techniques. Our mission is to revolutionize the advertising sector by enabling brands to reach their audiences more effectively.

    About the role:
    We are seeking an experienced and motivated Senior Data Scientist to join our dynamic team. The ideal candidate will have deep expertise in supervised learning, reinforcement learning, and optimization techniques. You will play a pivotal role in developing and implementing advanced machine learning models, driving actionable insights, and optimizing our advertising solutions.
    This position is based in Ukraine. The team primarily works remotely, with occasional in-person meetings in the Kyiv or Lviv office.

    Responsibilities:
    - Develop and implement advanced supervised and reinforcement learning models to improve ad targeting and campaign performance.
    - Collaborate with cross-functional teams to identify opportunities for leveraging machine learning and optimization techniques to solve business problems.
    - Conduct extensive data analysis and feature engineering to prepare datasets for machine learning tasks.
    - Apply optimization algorithms to enhance the effectiveness and efficiency of advertising campaigns.
    - Evaluate and refine existing models to enhance their accuracy, efficiency, and scalability.
    - Utilize statistical techniques and machine learning algorithms to analyze large and complex datasets.
    - Communicate findings and recommendations effectively to both technical and non-technical stakeholders.
    - Stay updated with the latest advancements in machine learning, reinforcement learning, and optimization techniques.
    - Work with engineering teams to integrate models into production systems.
    - Monitor, troubleshoot, and improve the performance of deployed models.
    - Mentor junior data scientists and contribute to the continuous improvement of the data science practice within the company.
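
    As a toy illustration of the reinforcement-learning-for-ads theme (simulated click-through rates, not the client's system), an epsilon-greedy bandit choosing among ad creatives can be sketched in a few lines:

    import random

    CREATIVES = {"A": 0.03, "B": 0.05, "C": 0.04}        # hypothetical true CTRs
    counts = {k: 0 for k in CREATIVES}
    rewards = {k: 0.0 for k in CREATIVES}

    def choose(epsilon: float = 0.1) -> str:
        if random.random() < epsilon:
            return random.choice(list(CREATIVES))        # explore
        # exploit: pick the creative with the best empirical CTR so far
        return max(CREATIVES, key=lambda k: rewards[k] / counts[k] if counts[k] else 0.0)

    random.seed(0)
    for _ in range(10_000):
        arm = choose()
        clicked = random.random() < CREATIVES[arm]
        counts[arm] += 1
        rewards[arm] += clicked

    for arm in CREATIVES:
        ctr = rewards[arm] / counts[arm] if counts[arm] else 0.0
        print(f"{arm}: shown {counts[arm]} times, empirical CTR {ctr:.4f}")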

    Requirements:
    - 5+ years of experience in data science or machine learning roles, with a strong focus on supervised learning, reinforcement learning, and optimization techniques.
    - Technical Skills:
    - Proficiency in Python.
    - Strong understanding of working with relational databases and SQL.
    - Experience with machine learning libraries such as scikit-learn, TensorFlow, PyTorch, or similar.
    - Deep understanding of statistical modeling and supervised learning algorithms (e.g., linear regression, logistic regression, decision trees, random forests, SVMs, gradient boosting, neural networks).
    - Hands-on experience with reinforcement learning algorithms and frameworks like OpenAI Gym.
    - Practical experience with optimization algorithms (linear, non-linear, combinatorial, etc.).
    - Hands-on experience with data manipulation tools and libraries (e.g., pandas, NumPy).
    - Familiarity with cloud services, specifically AWS, is a plus.
    - Practical experience building and managing cloud-based ML pipelines using AWS services (e.g. SageMaker, Bedrock) is a plus.
    - Education:
    - Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field. A PhD is a plus.

    Other Skills:
    - Strong analytical and problem-solving skills.
    - Excellent communication skills, with the ability to clearly articulate complex concepts to diverse audiences.
    - Ability to work in a fast-paced environment and manage multiple priorities.
    - Strong organizational skills and attention to detail.
    - Ability to mentor and guide junior data scientists.
    - Must be able to communicate with U.S.-based teams

    The company offers:
    - An opportunity to be at the forefront of advertising technology, impacting major marketing decisions.
    - A collaborative, innovative environment where your contributions make a difference.
    - The chance to work with a passionate team of data scientists, engineers, product managers, and designers.
    - A culture that values learning, growth, and the pursuit of excellence.

  • · 41 views · 7 applications · 18d

    Data Scientist to $6000

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    We are looking for an experienced Data Scientist to join our team.

     

    Requirements:

    • 5–8+ years of experience in Data Science/Analytics
    • Strong background in Mathematics, Statistics, or related field
    • Solid knowledge of statistical inference and hypothesis testing (p-values, Z-tests, Chi-Square)
    • Experience with machine learning for insight generation (e.g., clustering, segmentation, prediction)
    • Strong skills in Python and SQL
    • Strong stakeholder management and communication skills
    • Proven experience creating executive-ready reports and PowerPoint presentations
    • Ability to explain complex analytics to non-technical audiences
    • Upper-Intermediate or higher level of English (B2+)

     

    Responsibilities

    • Lead end-to-end analytics: from problem definition to insights and recommendations
    • Apply statistical analysis and machine learning for segmentation, trend analysis, and predictive insights
    • Design and interpret statistical tests (p-values, Z-tests, Chi-Square, confidence intervals)
    • Translate analytical results into clear business narratives
    • Prepare analytical reports and executive-level PowerPoint presentations (core part of the role)
    • Partner with business teams to align analytics with commercial objectives
    • Query, clean, and analyze data using SQL and Python
    • Act as a trusted analytics partner; mentor junior team members when needed
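
    For instance, the statistical testing listed above might look like the following sketch (made-up numbers), comparing conversion in a control and a variant group with a two-proportion Z-test and a Chi-Square test:

    import numpy as np
    from scipy.stats import chi2_contingency
    from statsmodels.stats.proportion import proportions_ztest

    conversions = np.array([420, 480])       # converted users in control / variant
    totals = np.array([10_000, 10_000])      # users exposed in each group

    z_stat, p_z = proportions_ztest(conversions, totals)
    print(f"Z = {z_stat:.3f}, p-value = {p_z:.4f}")

    table = np.array([
        conversions,             # converted
        totals - conversions,    # did not convert
    ])
    chi2, p_chi, dof, _ = chi2_contingency(table)
    print(f"Chi-Square = {chi2:.3f}, dof = {dof}, p-value = {p_chi:.4f}")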

     

    We offer:

    • A full-time job and a long-term contract
    • Flexible working hours
    • Paid vacation and sick leave
    • Managing your taxes and accounting
    • Career and professional growth opportunities
    • Optional benefits package that includes Health insurance, Gym membership, English courses, compensation of certification, courses, and training
    • Creative and lively team of IT specialists, adequate management, and zero unnecessary bureaucracy
  • · 20 views · 0 applications · 19d

    Data Architect (Azure Platform)

    Full Remote · Ukraine · 10 years of experience · English - B2

    Description

    As the Data Architect, you will be the senior technical visionary for the Data Platform. You will be responsible for the high-level design of the entire solution, ensuring it is scalable, secure, and aligned with the company's long-term strategic goals. Your decisions will form the technical foundation upon which the entire platform is built, from initial batch processing to future real-time streaming capabilities.

    Requirements

    Required Skills (Must-Haves)

    – Cloud Architecture: Extensive experience designing and implementing large-scale data platforms on Microsoft Azure.
    – Expert Technical Knowledge: Deep, expert-level understanding of the Azure data stack, including ADF, Databricks, ADLS, Synapse, and Purview.
    – Data Concepts: Mastery of data warehousing, data modeling (star schemas), data lakes, and both batch and streaming architectural patterns.
    – Strategic Thinking: Ability to align technical solutions with long-term business strategy.

    Nice-to-Have Skills:

    – Hands-on Coding Ability: Proficiency in Python/PySpark, allowing for the creation of architectural proofs-of-concept.
    – DevOps & IaC Acumen: Deep understanding of CI/CD for data platforms, experience with Infrastructure as Code (Bicep/Terraform), and experience with Azure DevOps for big data services.
    – Azure Cost Management: Experience with FinOps and optimizing the cost of Azure data services.

    Job responsibilities

    – End-to-End Architecture Design: Design and document the complete, end-to-end data architecture, encompassing data ingestion, processing, storage, and analytics serving layers.
    – Technology Selection & Strategy: Make strategic decisions on the use of Azure services (ADF, Databricks, Synapse, Event Hubs) to meet both immediate MVP needs and future scalability requirements.
    – Define Standards & Best Practices: Establish data modeling standards, development best practices, and governance policies for the engineering team to follow.
    – Technical Leadership: Provide expert technical guidance and mentorship to the data engineers and BI developers, helping them solve the most complex technical challenges.
    – Stakeholder Communication: Clearly articulate the architectural vision, benefits, and trade-offs to technical teams, project managers, and senior business leaders.
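
    As a small, invented illustration of the star-schema modeling and hands-on PySpark work referenced above (table and column names are hypothetical), a fact table joined to its dimensions for the analytics serving layer might look like this:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

    fact_sales = spark.createDataFrame(
        [(1, 10, 100, 2, 19.98), (2, 11, 100, 1, 5.49)],
        ["sale_id", "product_key", "store_key", "qty", "amount"],
    )
    dim_product = spark.createDataFrame(
        [(10, "Milk", "Dairy"), (11, "Bread", "Bakery")],
        ["product_key", "product_name", "category"],
    )
    dim_store = spark.createDataFrame([(100, "Kyiv-01")], ["store_key", "store_name"])

    # Typical star-schema query: join the fact table to its dimensions, then aggregate.
    report = (
        fact_sales
        .join(dim_product, "product_key")
        .join(dim_store, "store_key")
        .groupBy("store_name", "category")
        .agg(F.sum("amount").alias("revenue"), F.sum("qty").alias("units"))
    )
    report.show()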

  • · 53 views · 3 applications · 19d

    Machine Learning Engineer

    Part-time · Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Responsibilities

     

    Model Fine-Tuning and Deployment:

    Fine-tune pre-trained models (e.g., BERT, GPT) for specific tasks and deploy them using Amazon SageMaker and Bedrock.

    RAG Workflows:

    Establish Retrieval-Augmented Generation (RAG) workflows that leverage knowledge bases built on Kendra or OpenSearch. This includes integrating various data sources, such as corporate documents, inspection checklists, and real-time external data feeds.

    MLOps Integration:

    Build and operate a comprehensive MLOps framework to manage the end-to-end lifecycle of machine learning models, including continuous integration and delivery (CI/CD) pipelines for model training, versioning, deployment, and monitoring. Automate workflows so that models stay up to date with the latest data and remain optimized for performance in production environments.

    Scalable and Customizable Solutions:

    Ensure that both the template and ingestion pipelines are scalable, allowing for adjustments to meet specific customer needs and environments. This involves setting up RAG workflows, knowledge bases using Kendra/OpenSearch, and seamless integration with customer data sources.

    End-to-End Workflow Automation:

    Automate the end-to-end process from user input to response generation, ensuring that the solution leverages AWS services like Bedrock Agents, CloudWatch, and QuickSight for real-time monitoring and analytics.

    Advanced Monitoring and Analytics:

    Integrate with AWS CloudWatch, QuickSight, and other monitoring tools to provide real-time insights into performance metrics, user interactions, and system health, enabling continuous optimization of service delivery and rapid identification of any issues.

    Model Monitoring and Maintenance:

    Implement model monitoring to track performance metrics and trigger retraining as necessary.

    Collaboration:

    Work closely with data engineers and DevOps engineers to ensure seamless integration of models into the production pipeline.

    Documentation:

    Document model development processes, deployment procedures, and monitoring setups for knowledge sharing and future reference.
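
    A compact sketch of the fine-tuning responsibility above (tiny invented dataset and labels), using the Hugging Face Trainer API; packaging and deployment to SageMaker or Bedrock would follow as separate steps and are not shown:

    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL_NAME = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    data = Dataset.from_dict({
        "text": ["please review the attached inspection checklist",
                 "invoice overdue, escalate to finance"],
        "label": [0, 1],                         # hypothetical document categories
    })
    data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                        padding="max_length", max_length=64),
                    batched=True)

    args = TrainingArguments(output_dir="out", num_train_epochs=1,
                             per_device_train_batch_size=2, logging_steps=1)
    trainer = Trainer(model=model, args=args, train_dataset=data)
    trainer.train()
    trainer.save_model("out/fine_tuned")         # artifact that would later be deployed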

     

    Must-Have Skills

     

    Machine Learning: Strong experience with machine learning frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.

    MLOps Tools: Proficiency with Amazon SageMaker for model training, deployment, and monitoring.

    Document processing: Experience with document processing for Word, PDF, images.

    OCR: Experience with OCR tools like Tesseract / AWS Textract (preferred)

    Programming: Proficiency in Python, including libraries such as Pandas, NumPy, and Scikit-Learn.

    Model Deployment: Experience with deploying and managing machine learning models in production environments.

    Version Control: Familiarity with version control systems like Git.

    Automation: Experience with automating ML workflows using tools like AWS Step Functions or Apache Airflow.

    Agile Methodologies: Experience working in Agile environments using tools like Jira and Confluence.

     

    Nice-to-Have Skills

     

    LLM: Experience with LLM / GenAI models, LLM services (Bedrock or OpenAI), LLM abstraction layers (Dify, LangChain, FlowiseAI), agent frameworks, and RAG.

    Deep Learning: Experience with deep learning models and techniques.

    Data Engineering: Basic understanding of data pipelines and ETL processes.

    Containerization: Experience with Docker and Kubernetes (EKS).

    Serverless Architectures: Experience with AWS Lambda and Step Functions.

    Rule engine frameworks: Drools or similar

     

    If you are a motivated individual with a passion for ML and a desire to contribute to a dynamic team environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of infrastructure and driving innovation in software delivery processes.

  • · 41 views · 2 applications · 19d

    Senior Data Science Engineer

    Full Remote · Poland, Spain, Portugal, Romania, Bulgaria · 5 years of experience · English - C1

    We are seeking a skilled Senior Data Science Engineer to design, build, and deploy production machine learning solutions for an enterprise Fleet Cascading & Optimization Platform managing 46,000+ vehicles across 545+ locations. In this role, you will develop and operationalize demand forecasting, cascading optimization, contract intelligence (NLP/Vision), and out-of-spec prediction models with a strong focus on explainability and business impact. You will own the end-to-end ML lifecycle — from experimentation and model development to scalable production deployment on AWS — working closely with engineering and business stakeholders to deliver reliable, data-driven outcomes.

    Must-Have Requirements:

    • Programming & ML Frameworks: Python; PyTorch or TensorFlow; scikit-learn; XGBoost or LightGBM; pandas; NumPy 
    • Time Series & Forecasting: BSTS; Prophet; Temporal Fusion Transformer (TFT); hierarchical forecasting with MinT reconciliation 
    • Optimization: Linear Programming and MILP using tools such as PuLP and OR-Tools; constraint satisfaction; min-cost flow optimization 
    • AWS ML Stack: Amazon SageMaker (Training Jobs, Endpoints, Model Monitor, Clarify, Feature Store, Pipelines) 

    Nice-to-have: 

    • NLP & Document AI: Amazon Textract; LayoutLMv3; Retrieval-Augmented Generation (RAG) pipelines; Amazon Bedrock (Claude); OpenSearch vector databases 
    • Advanced Machine Learning: Graph Neural Networks (GNNs); Deep Reinforcement Learning; Survival Analysis (Cox Proportional Hazards, XGBoost-Survival); attention-based models 
    • Explainability & MLOps: SHAP, LIME, Captum; MLflow; A/B testing; champion/challenger frameworks; model and data drift detection 

    Core Responsibilities:

    • Build demand forecasting models (XGBoost, BSTS, Temporal Fusion Transformer) with hierarchical reconciliation across 545+ locations 
    • Develop cascading optimization using MILP/Min-Cost Flow solvers (PuLP, OR-Tools, Gurobi) and Hybrid ML+Optimization pipelines 
    • Implement document intelligence pipeline: Textract + LayoutLMv3 for document extraction, RAG with Bedrock (Claude) for semantic reasoning 
    • Deploy models on SageMaker with MLOps (Model Monitor, Feature Store, Pipelines); implement SHAP/LIME explainability 

    Models You’ll Build:

    • Demand Forecasting: Gradient-boosted models (XGBoost), Bayesian Structural Time Series (BSTS), and Temporal Fusion Transformers (TFT), including hierarchical reconciliation 
    • Cascading Optimization: Mixed-Integer Linear Programming (MILP) and Min-Cost Flow models, evolving to hybrid ML + solver approaches and advanced Graph Neural Network (GNN) and Deep Reinforcement Learning (DRL) solutions 
    • Document Intelligence: Automated document extraction using Amazon Textract and LayoutLMv3, advancing to Retrieval-Augmented Generation (RAG) pipelines with Amazon Bedrock and Vision-Language Models 
    • Survival & Out-of-Spec Prediction: Kaplan–Meier estimators, Cox Proportional Hazards models, and XGBoost-Survival techniques
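
    As a toy sketch of the MILP-based cascading optimization above (invented locations, costs, and volumes; the platform's real model is far richer), PuLP can express a minimal vehicle-transfer problem like this:

    from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

    donors = {"LOC_A": 3, "LOC_B": 2}             # surplus vehicles available
    receivers = {"LOC_X": 2, "LOC_Y": 3}          # forecast shortfall to cover
    cost = {                                       # transfer cost per vehicle
        ("LOC_A", "LOC_X"): 120, ("LOC_A", "LOC_Y"): 90,
        ("LOC_B", "LOC_X"): 60,  ("LOC_B", "LOC_Y"): 150,
    }

    prob = LpProblem("fleet_cascade", LpMinimize)
    move = {k: LpVariable(f"move_{k[0]}_{k[1]}", lowBound=0, cat="Integer") for k in cost}

    prob += lpSum(cost[k] * move[k] for k in cost)                   # minimize transfer cost
    for d, supply in donors.items():                                 # do not exceed each surplus
        prob += lpSum(move[(d, r)] for r in receivers) <= supply
    for r, demand in receivers.items():                              # cover each shortfall
        prob += lpSum(move[(d, r)] for d in donors) >= demand

    prob.solve()
    for k, var in move.items():
        print(k, int(value(var)))
    print("total cost:", value(prob.objective))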

    What we offer:   

    • Continuous learning and career growth opportunities 
    • Professional training and English/Spanish language classes   
    • Comprehensive medical insurance 
    • Mental health support 
    • Specialized benefits program with compensation for fitness activities, hobbies, pet care, and more 
    • Flexible working hours 
    • Inclusive and supportive culture 

    About Us:

    Established in 2011, Trinetix is a dynamic tech service provider supporting enterprise clients around the world. 

    Headquartered in Nashville, Tennessee, we have a global team of over 1,000 professionals and delivery centers across Europe, the United States, and Argentina. We partner with leading global brands, delivering innovative digital solutions across Fintech, Professional Services, Logistics, Healthcare, and Agriculture. 

    Our operations are driven by a strong business vision, a people-first culture, and a commitment to responsible growth. We actively give back to the community through various CSR activities and adhere to international principles for sustainable development and business ethics. 

     

    To learn more about how we collect, process, and store your personal data, please review our Privacy Notice: https://www.trinetix.com/corporate-policies/privacy-notice
  • · 116 views · 36 applications · 23d

    Senior AI / Machine Learning Engineer to $6500

    Full Remote · Worldwide · Product · 5 years of experience · English - B2

    About Tie

    Tie is building the next generation of identity resolution and marketing intelligence. Our platform connects hundreds of millions of consumers across devices, browsers, and channels — without relying on cookies — to power higher deliverability, smarter targeting, and measurable revenue lift for modern marketing teams.

    At Tie, AI is not a feature — it is a core execution advantage. We operate large-scale identity graphs, real-time scoring systems, and production ML pipelines that directly impact revenue, deliverability, and customer growth.

    The Role

    We are looking for a Senior AI / Machine Learning Engineer to design, build, and deploy production ML systems that sit at the heart of our identity graph and scoring platform. You will work at the intersection of machine learning, graph data, and real-time systems, owning models end to end — from feature engineering and training through deployment, monitoring, and iteration.

    This role is highly hands-on and impact-driven. You will help define Tie's ML architecture, ship models that operate at sub-second latency, and partner closely with platform engineering to ensure our AI systems scale reliably.

    What You’ll Do

    • Design and deploy production-grade ML models for identity resolution, propensity scoring, deliverability, and personalization
    • Build and maintain feature pipelines across batch and real-time systems (BigQuery, streaming events, graph-derived features)
    • Develop and optimize classification models (e.g., XGBoost, logistic regression) with strong handling of class imbalance and noisy labels
    • Integrate ML models directly with graph databases to support real-time inference and identity scoring
    • Own model lifecycle concerns: evaluation, monitoring, drift detection, retraining, and performance reporting
    • Partner with engineering to expose models via low-latency APIs and scalable services
    • Contribute to GPU-accelerated and large-scale data processing efforts as we push graph computation from hours to minutes
    • Help shape ML best practices, tooling, and standards across the team
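
    A minimal sketch (synthetic data, not Tie's pipeline) of the rare-positive classification setup described above, using XGBoost's scale_pos_weight to counter class imbalance and AUC-PR for evaluation:

    from sklearn.datasets import make_classification
    from sklearn.metrics import average_precision_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=20_000, n_features=20,
                               weights=[0.97, 0.03], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Ratio of negatives to positives is the usual starting point for scale_pos_weight.
    spw = (y_tr == 0).sum() / (y_tr == 1).sum()
    model = XGBClassifier(n_estimators=300, max_depth=5, learning_rate=0.05,
                          scale_pos_weight=spw, eval_metric="aucpr")
    model.fit(X_tr, y_tr)

    scores = model.predict_proba(X_te)[:, 1]
    print("AUC-PR:", average_precision_score(y_te, scores))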

    What You’ll Bring

    Required Qualifications

    • 5+ years of experience building and deploying machine learning systems in production
    • Strong proficiency in Python for ML, data processing, and model serving
    • Hands-on experience with feature engineering, model training, and evaluation for real-world datasets
    • Ability to travel outside of Ukraine is a must
    • Experience deploying ML models via APIs or services (e.g., FastAPI, containers, Kubernetes)
    • Solid understanding of data modeling, SQL, and analytical workflows
    • Experience working in a cloud environment (GCP, AWS, or equivalent)
    • Experience with graph data, graph databases, or graph-based ML
    • Familiarity with Neo4j, Cypher, or graph algorithms (community detection, entity resolution)

      Preferred / Bonus Experience

    • Experience with XGBoost, tree-based models, or similar classical ML approaches
    • Exposure to real-time or streaming systems (Kafka, Pub/Sub, event-driven architectures)
    • Experience with MLOps tooling and practices (CI/CD for ML, monitoring, retraining pipelines)
    • GPU or large-scale data processing experience (e.g., RAPIDS, CUDA, Spark, or similar)
    • Domain experience in identity resolution, marketing technology, or email deliverability

    Our Technology Stack

    • ML & Data: Python, Pandas, Scikit-learn, XGBoost
    • Graphs: Neo4j (Enterprise, GDS)
    • Cloud: Google Cloud Platform (BigQuery, Vertex AI, Cloud Run, Pub/Sub)
    • Infrastructure: Docker, Kubernetes, GitHub Actions
    • APIs: FastAPI, REST-based inference services

    What We Offer

    • Competitive compensation, including salary, equity, and performance incentives
    • Opportunity to work on core AI systems that directly impact revenue and product differentiation
    • High ownership and autonomy in a senior, hands-on role
    • Remote-first culture with a strong engineering and data focus
    • Exposure to cutting-edge problems in identity resolution, graph ML, and real-time AI systems
    • Clear growth path toward Staff / Principal IC roles

      What else:

    • 4 weeks of paid vacation per year (flexible scheduling)
    • Unlimited sick leave — we trust your judgment and care about your health
    • US Bank Holidays off (American calendar)
    • Remote-first culture and flexible working hours
    • Flat structure, no micromanagement, and full ownership
    • Opportunity to make a real impact during a critical growth phase

      Interview Process

    • Recruitment Screening Call
    • Initial call with Head of Data Science & AI and CTO (30 min) in English
    • Technical deep-dive interview (1.5h) in English
    • Optional test-task (paid)

      Why Join Us?

    • High-impact delivery leadership role during a critical period
    • Real ownership and autonomy
    • Opportunity to shape delivery across the entire engineering organization
    • Exposure to SaaS, data, integrations, automation, and platform work
    • Collaboration with global teams and vendors
    • A strong product with real scale and momentum

    Why This Role Matters

    At Tie, your work will not live in notebooks or experiments — it will power production systems used by real customers at scale. You will help define how AI is embedded into the company's core platform and play a key role in making machine learning a durable competitive advantage.

     

  • · 92 views · 36 applications · 24d

    Data Scientist

    Full Remote · Worldwide · Product · 3 years of experience · English - B1

    Almus is looking for a Data Scientist to join our Analytics team and build production-grade machine learning models that directly impact marketing and business performance.

    You will work on end-to-end ML solutions, from data and features to deployment and monitoring, focusing on improving LTV prediction quality, optimizing ML-driven costs, and driving key metrics such as LTV, ROAS, retention, and CAC. This is an individual contributor role with strong ownership, close collaboration with Marketing, Product, and Data teams, and a clear focus on real business impact.

    Apply to join Almus and take ownership of high-impact data initiatives!

     

    Responsibilities

    • Design, develop, and deploy machine learning models to production
    • Improve product and business decision-making through data-driven approaches
    • Build and evolve end-to-end ML pipelines (data → features → model → inference → monitoring)
    • Drive measurable impact on key product and commercial metrics
    • Standardize ML approaches within the team (best practices, documentation, reproducibility)
    • Provide technical input to the architecture of analytics and ML infrastructure
    • Develop and deploy models that drive growth in LTV, ROAS, retention, and CAC
    • Influence performance and lifecycle marketing strategy
    • Act as a domain expert and collaborate closely with Marketing, Product, and Data Engineering teams

     

    What We Look For

    • 3+ years of experience as a Data Scientist / ML Engineer
    • Experience working with mobile subscription-based products
    • Strong Python skills (production-level code)
    • Solid knowledge of classical machine learning algorithms and practical experience applying them
    • Experience with feature engineering, model evaluation, and bias–variance trade-offs
    • Hands-on experience with marketing models such as LTV, churn, cohort, and funnel modeling
    • Experience with attribution, incrementality, and uplift modeling
    • Strong SQL skills and experience working with analytical datasets
    • Experience with production ML systems and A/B testing
    • English level: Intermediate+
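
    As a compact, simulated illustration of the uplift modeling mentioned above, the simple two-model ("T-learner") approach fits separate response models for treated and control users and scores the difference:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    n = 20_000
    X = rng.normal(size=(n, 5))
    treated = rng.integers(0, 2, size=n)                  # 1 = user saw the campaign
    base = 0.10 + 0.05 * (X[:, 0] > 0)                    # baseline conversion rate
    lift = 0.04 * (X[:, 1] > 0)                           # only some users respond to treatment
    converted = rng.random(n) < base + treated * lift

    m_treat = GradientBoostingClassifier().fit(X[treated == 1], converted[treated == 1])
    m_ctrl = GradientBoostingClassifier().fit(X[treated == 0], converted[treated == 0])

    uplift = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]
    print("mean predicted uplift:", uplift.mean())
    print("top-decile predicted uplift:", np.sort(uplift)[-n // 10:].mean())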

       

    Nice to have

    • Experience with BigQuery
    • MLOps experience (Docker, CI/CD, model registries)
    • Experience working with performance marketing data (Meta, Google Ads, Adjust)
    • Knowledge of causal inference
    • Experience with AutoML and Bayesian models

       

    We Offer

    • Exciting challenges and growth prospects together with an international company
    • High decision-making speed and diverse projects
    • Flexibility in approaches, no processes for the sake of processes
    • Effective and friendly communication at any level
    • Highly competitive compensation package that recognizes your expertise and experience, Performance Review practice to exchange feedback and discuss terms of cooperation
    • Flexible schedule, opportunity to work in a stylish and comfortable office or remotely
    • Respect for work-life balance (holidays, sick days - of course)
    • Bright corporate events and gifts for employees
    • Additional medical insurance
    • Compensation for specialized training and conference attendance
    • Restaurant lunches at the company's expense for those working in the office, endless supplies of delicious food all year round
       
  • · 41 views · 8 applications · 24d

    Data Scientist

    Countries of Europe or Ukraine · Product · 4 years of experience · English - None

    Join Burny Games — a Ukrainian company that creates mobile puzzle games. Our mission is to create top-notch innovative games to challenge players' minds daily.

    What makes us proud?

    • In just two years, we've launched two successful mobile games worldwide: Playdoku and Colorwood Sort. We have paused some projects to focus on making our games better and helping our team improve.
    • Our games have been enjoyed by over 45 million players worldwide, and we keep attracting more players.
    • We've created a culture where we make decisions based on data, which helps us grow every month.
    • We believe in keeping things simple, focusing on creativity, and always searching for new and effective solutions.

    What are you working on?

    • Genres: Puzzle, Casual
    • Platforms: Mobile, iOS, Android, Social

    Team size and structure?

    130+ employees

    Key Responsibilities:

    • Build and maintain ML models for product and marketing teams
    • Develop predictive systems for personalization, recommendations, and dynamic game content
    • Automate data workflows and create reliable, scalable ML pipelines from feature engineering to deployment
    • Monitor model performance, detect drift, and ensure ongoing accuracy and stability of ML systems
    • Partner with Product, Marketing, and Engineering to integrate ML solutions into live games and operational workflows
    • Own DS/ML projects end-to-end: from defining the problem to production deployment and iteration
    • Share knowledge, conduct code reviews, and promote best practices across the data team
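
    A tiny sketch (synthetic feature values) of the drift monitoring mentioned above: compare a feature's live distribution against its training-time snapshot with a two-sample Kolmogorov-Smirnov test and alert when it shifts:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(7)
    train_snapshot = rng.normal(loc=0.0, scale=1.0, size=5_000)   # feature at training time
    live_window = rng.normal(loc=0.3, scale=1.1, size=5_000)      # same feature this week

    stat, p_value = ks_2samp(train_snapshot, live_window)
    print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")
    if p_value < 0.01:
        print("Drift alert: investigate the feature or schedule retraining.")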

    About You:

    • 4+ years of experience in Data Science or ML, with a track record of delivering production models (2+ years in gamedev or consumer apps businesses)
    • Strong background in statistical modeling, forecasting, and machine learning
    • Advanced programming skills in Python or R (pandas, numpy, scikit-learn, PyTorch/TensorFlow or tidyverse, caret, mlr), writing clean and maintainable code
    • Excellent SQL skills, confident with large-scale datasets and cloud data warehouses (BigQuery, Snowflake, Redshift)
    • Experience deploying, monitoring, and maintaining ML models in production environments
    • Strong problem-solving mindset, able to translate business and product goals into ML solutions
    • Clear communicator who can explain complex models and systems to both technical and non-technical teams
    • Passion for gaming and curiosity about player behavior

    Will Be a Plus:

    • Experience building user-level LTV forecasting models
    • Background in recommender systems, personalization, or contextual bandits
    • Familiarity with MLOps practices and tools
    • Experience with ETL/orchestration frameworks (dbt, Dataform, Airflow)
    • We run on GCP β€” experience with BigQuery, Vertex AI, Pub/Sub, and Cloud Run/Functions

    What we offer:

    • 100% payment of vacations and sick leave [20 days vacation, 22 days sick leave], medical insurance.
    • A team of the best professionals in the games industry.
    • Flexible schedule [start of work from 8 to 11, 8 hours/day].
    • L&D center with courses.
    • Self-learning library, access to paid courses.
    • Stable payments.

    The recruitment process:

    CV review → Interview with TA manager → Interview with Head of Analytics → Final Interview → Job offer

    If you share our goals and values and are eager to join a team of dedicated professionals, we invite you to take the next step.

  • · 26 views · 2 applications · 25d

    Senior/Middle Data Scientist

    Full Remote · Ukraine · Product · 3 years of experience · English - B1

    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will design and implement a state-of-the-art evaluation and benchmarking framework to measure and guide model quality, and personally train LLMs with a strong focus on Reinforcement Learning from Human Feedback (RLHF). You will work alongside top AI researchers and engineers, ensuring the models are not only powerful but also aligned with user needs, cultural context, and ethical standards.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in machine learning model evaluation and/or NLP benchmarking.
    - Advanced degree (Master's or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, and RAG.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Solid understanding of RLHF concepts and related techniques (preference modeling, reward modeling, reinforcement learning).
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience creating and managing test datasets, including annotation and labeling processes.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.

    Nice to have:
    Advanced NLP/ML Techniques:
    - Prior work on LLM safety, fairness, and bias mitigation.
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Knowledge of data annotation workflows and human feedback collection methods.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian benchmarks, or familiarity with other evaluation datasets and leaderboards for large models, can be an advantage given the project's focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Analyze benchmarking datasets, define gaps, and design, implement, and maintain a comprehensive benchmarking framework for the Ukrainian language.
    - Research and integrate state-of-the-art evaluation metrics for factual accuracy, reasoning, language fluency, safety, and alignment.
    - Design and maintain testing frameworks to detect hallucinations, biases, and other failure modes in LLM outputs.
    - Develop pipelines for synthetic data generation and adversarial example creation to challenge the model's robustness.
    - Collaborate with human annotators, linguists, and domain experts to define evaluation tasks and collect high-quality feedback.
    - Develop tools and processes for continuous evaluation during model pre-training, fine-tuning, and deployment.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Analyze benchmarking results to identify model strengths, weaknesses, and improvement opportunities.
    - Work closely with other data scientists to align training and evaluation pipelines.
    - Document methodologies and share insights with internal teams.
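
    As a toy sketch of such an evaluation harness (examples invented, not the real benchmark), model outputs can be scored against references with exact match and sentence-level BLEU; perplexity, ROUGE, and safety checks would plug into the same loop:

    from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

    eval_set = [
        {"question": "Столиця України?", "reference": "Київ", "model_output": "Київ"},
        {"question": "Хто написав «Кобзар»?", "reference": "Тарас Шевченко",
         "model_output": "Кобзар написав Тарас Шевченко"},
    ]

    smooth = SmoothingFunction().method1
    exact, bleu_scores = 0, []
    for ex in eval_set:
        exact += ex["model_output"].strip() == ex["reference"].strip()
        bleu_scores.append(sentence_bleu([ex["reference"].split()],
                                         ex["model_output"].split(),
                                         smoothing_function=smooth))

    print(f"Exact match: {exact / len(eval_set):.2f}")
    print(f"Mean BLEU:   {sum(bleu_scores) / len(bleu_scores):.2f}")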

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.
