Jobs (82)

  • 33 views · 3 applications · 27d

    Head of Data Science

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    United Tech is a global IT product company shaping the future of real-time social connection.

    With millions of users across North America, Europe, LATAM, and MENA, we build next-gen mobile and web apps for live-streaming and social networking.

    Our platforms enable connection at scale: fast, interactive, and deeply engaging.

    The market is projected to exceed $206B by 2030, and we are already leading the evolution.

    Founded in Ukraine, scaling worldwide. Are you in?


    About the role: This is a role for a leader who thrives in a fast-moving product environment, where challenges fuel growth and every decision shapes the future. With United Tech, you will have the freedom to design, build, and re-engineer processes from the ground up, working side by side with a team that values initiative, knowledge sharing, and bold goals. Your ideas will directly influence revenue, and you will see the tangible results of your work reflected in key financial metrics. We move fast, we cut through bureaucracy, and we take on complex challenges that push both personal and professional boundaries, limited only by the scale of your ambitions.


    In this role, you will

    • Build, scale, and develop the Data Science team, including hiring, mentoring, and performance evaluation
    • Define and execute a Data Science strategy aligned with business priorities
    • Oversee the full lifecycle of DS projects from problem formulation to deploying models into production
    • Prioritize initiatives based on business impact (ROI, time-to-market)
    • Collaborate closely with product managers, analysts, engineers, and C-level executives


    It’s all about you

    • Proven ability to implement best practices in Data Science: reproducibility, A/B testing, ML monitoring
    • Strong track record in maintaining model quality (performance, drift, latency)
    • Advanced Python skills (pandas, sklearn, numpy, xgboost; pytorch/tf is a plus)
    • High-level SQL expertise (large datasets, query optimization)
    • Hands-on experience with ML pipelines, orchestration (Airflow, Prefect), and monitoring (Evidently, MLflow, Prometheus)
    • Experience with GCP
    • Solid understanding of A/B testing, causal inference, and statistics
    • Familiarity with architecture fundamentals (API, data pipelines, microservices β€” integration level)
    • Cloud experience with AWS or Azure
    • Knowledge of distributed computation tools (e.g., Spark, Cloud Run)
    • Proven track record with generative AI/NLP model deployment
    • Experience in startups or high-growth companies
    • Publications, conference speaking, achievements on Kaggle, or open-source contributions
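
    For illustration, the A/B testing and statistics expectations above come down to disciplined experiment evaluation. A minimal sketch of a two-proportion z-test for a conversion A/B test, with made-up numbers and names chosen only for the example:

      import numpy as np
      from scipy.stats import norm

      def ab_z_test(conv_a, n_a, conv_b, n_b):
          # Two-sided z-test for the difference between two conversion rates.
          p_a, p_b = conv_a / n_a, conv_b / n_b
          p_pool = (conv_a + conv_b) / (n_a + n_b)
          se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
          z = (p_b - p_a) / se
          p_value = 2 * (1 - norm.cdf(abs(z)))
          return p_b - p_a, z, p_value

      uplift, z, p = ab_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
      print(f"uplift={uplift:.4f}, z={z:.2f}, p={p:.3f}")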


    Would be a plus

    • Experience optimizing company processes through AI solutions
    • Implementation of AI tools across departments (development, support, marketing, etc.)


    What we offer

    Care and support: 

    • 20 paid vacation days, 15 sick days, and 6 additional days off for family events
    • Up to 10 additional days off for public holidays
    • 100% medical insurance coverage
    • Sports and equipment reimbursement
    • Team building events, corporate gifts, and stylish merch
    • Financial and legal support
    • Position retention and support for those who join the Armed Forces of Ukraine
    • Participation in social initiatives supporting Ukraine
       

    Comfortable working environment:

    • Work from our Kyiv hub or remotely with a flexible schedule 
    • Workspace rental reimbursement in other cities and abroad
    • Modern equipment, or compensation for the depreciation of your own equipment
       

    Investment in your future:

    • Collaborate with a highly-skilled team of Middle & Senior professionals, sharing practical cases and expertise in the social networking niche
    • 70% of our heads and leads have grown into their roles here – so can you!
    • Performance-oriented reviews and Individual Development Plans (IDPs)
    • Reimbursement for professional courses and English classes
    • Corporate library, book club, and knowledge-sharing events
       

    Hiring process

    • Intro call
    • Technical Interview
    • Interview with Hiring Manager
    • Final Interview
    • Reference check
    • Offer
  • 34 views · 3 applications · 25d

    Data Scientist (Benchmarking and Alignment)

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate

    We are seeking an experienced Senior/Middle Data Scientist with a passion for large language models (LLMs) and cutting-edge AI research. In this role, you will design and implement a state-of-the-art evaluation and benchmarking framework to measure and guide model quality, and personally train LLMs with a strong focus on Reinforcement Learning from Human Feedback (RLHF). You will work alongside top AI researchers and engineers, ensuring our models are not only powerful but also aligned with user needs, cultural context, and ethical standards. The benchmarks and feedback loops you own serve as the contract for quality: gating releases, catching regressions before users do, and enabling compliant, trustworthy features to ship with confidence.

     

    What you will do

    • Analyze benchmarking datasets, define gaps, and design, implement, and maintain a comprehensive benchmarking framework for the Ukrainian language.
    • Research and integrate state-of-the-art evaluation metrics for factual accuracy, reasoning, language fluency, safety, and alignment.
    • Design and maintain testing frameworks to detect hallucinations, biases, and other failure modes in LLM outputs.
    • Develop pipelines for synthetic data generation and adversarial example creation to challenge the model’s robustness.
    • Collaborate with human annotators, linguists, and domain experts to define evaluation tasks and collect high-quality feedback.
    • Develop tools and processes for continuous evaluation during model pre-training, fine-tuning, and deployment.
    • Research and develop best practices and novel techniques in LLM training pipelines.
    • Analyze benchmarking results to identify model strengths, weaknesses, and improvement opportunities.
    • Work closely with other data scientists to align training and evaluation pipelines.
    • Document methodologies and share insights with internal teams.
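
    As a rough illustration of the evaluation loop described above, a minimal benchmarking sketch that scores model answers against references with exact match and token-level F1; the dataset fields and the generate callable are hypothetical placeholders:

      from collections import Counter

      def token_f1(prediction: str, reference: str) -> float:
          pred, ref = prediction.lower().split(), reference.lower().split()
          overlap = sum((Counter(pred) & Counter(ref)).values())
          if overlap == 0:
              return 0.0
          precision, recall = overlap / len(pred), overlap / len(ref)
          return 2 * precision * recall / (precision + recall)

      def run_benchmark(examples, generate):
          # examples: iterable of {"prompt": str, "reference": str}; generate: prompt -> str
          rows = []
          for ex in examples:
              answer = generate(ex["prompt"])
              rows.append({
                  "exact_match": float(answer.strip() == ex["reference"].strip()),
                  "f1": token_f1(answer, ex["reference"]),
              })
          return {k: sum(r[k] for r in rows) / len(rows) for k in rows[0]} if rows else {}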

     

    Qualifications and experience needed

    Education & Experience:

    • 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    • Proven experience in machine learning model evaluation and/or NLP benchmarking.
    • An advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.

    NLP Expertise:

    • Good knowledge of natural language processing techniques and algorithms.
    • Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, RAGs.
    • Familiarity with LLM training and fine-tuning techniques.

    ML & Programming Skills:

    • Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    • Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    • Solid understanding of RLHF concepts and related techniques (preference modeling, reward modeling, reinforcement learning).
    • Ability to write efficient, clean code and debug complex model issues.
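
    For context on the RLHF item above: reward modeling typically reduces to a pairwise (Bradley-Terry) preference loss. A minimal sketch, assuming reward_model is any PyTorch module that maps an encoded response to a scalar score:

      import torch.nn.functional as F

      def preference_loss(reward_model, chosen_inputs, rejected_inputs):
          r_chosen = reward_model(chosen_inputs).squeeze(-1)      # (batch,)
          r_rejected = reward_model(rejected_inputs).squeeze(-1)  # (batch,)
          # Maximize the log-probability that the chosen response outranks the rejected one.
          return -F.logsigmoid(r_chosen - r_rejected).mean()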

    Data & Analytics:

    • Solid understanding of data analytics and statistics.
    • Experience creating and managing test datasets, including annotation and labeling processes.
    • Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    • Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.

    Deployment & Tools:

    • Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    • Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    • Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training is a plus.

    Communication:

    • Experience working in a collaborative, cross-functional environment.
    • Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies.

     

    A plus would be

    Advanced NLP/ML Techniques:

    • Prior work on LLM safety, fairness, and bias mitigation.
    • Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    • Knowledge of data annotation workflows and human feedback collection methods.
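
    As a small aside on the metrics named above, perplexity is simply the exponential of the average negative log-likelihood per token; the log-probabilities below are made-up values:

      import math

      def perplexity(token_logprobs):
          # token_logprobs: natural-log probabilities, one per generated token
          nll = -sum(token_logprobs) / len(token_logprobs)
          return math.exp(nll)

      print(round(perplexity([-0.9, -1.2, -0.4, -2.1]), 2))  # 3.16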

    Research & Community:

    • Publications in NLP/ML conferences or contributions to open-source NLP projects.
    • Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicates a passion for staying at the forefront of the field.

    Domain & Language Knowledge:

    • Familiarity with the Ukrainian language and context.
    • Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    • Knowledge of Ukrainian benchmarks, or familiarity with other evaluation datasets and leaderboards for large models, can be an advantage given our project’s focus.

    MLOps & Infrastructure:

    • Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    • Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.

    Problem-Solving:

    • Innovative mindset with the ability to approach open-ended AI problems creatively.
    • Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

     

    What we offer:

    • Office or remote: it’s up to you. You can work from anywhere, and we will arrange your workplace.
    • Remote onboarding.
    • Performance bonuses for everyone (annual or quarterly, depending on the role).
    • We train our people, offering opportunities to learn through the company’s library, internal resources, and programs from partners.
    • Health and life insurance.
    • Wellbeing program and corporate psychologist.
    • Reimbursement of expenses for Kyivstar mobile communication.
  • 62 views · 1 application · 13d

    Senior/Middle Data Scientist

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, RAGs.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar data-processing pipeline approaches.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project’s focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form targeted SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.
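
    To make the cleaning and de-duplication steps listed above concrete, a minimal sketch of exact de-duplication plus naive normalization and e-mail masking; a production pipeline (and its PII rules) would be far more thorough, and all names here are illustrative:

      import hashlib
      import re
      import unicodedata

      EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

      def clean(text: str) -> str:
          text = unicodedata.normalize("NFC", text)
          text = EMAIL_RE.sub("<EMAIL>", text)       # naive PII masking
          return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

      def deduplicate(docs):
          seen, kept = set(), []
          for doc in docs:
              digest = hashlib.sha256(clean(doc).encode("utf-8")).hexdigest()
              if digest not in seen:
                  seen.add(digest)
                  kept.append(doc)
          return kept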

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 73 views · 6 applications · 19d

    Senior/Middle Data Scientist

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, RAGs.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar data-processing pipeline approaches.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project’s focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form targeted SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.
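
    As a rough illustration of the SFT/RLHF dataset formation mentioned above, a sketch that packs existing question-answer records into a chat-style JSONL file; the field names and file path are hypothetical:

      import json

      def to_sft_jsonl(records, path):
          with open(path, "w", encoding="utf-8") as f:
              for rec in records:
                  example = {"messages": [
                      {"role": "user", "content": rec["question"]},
                      {"role": "assistant", "content": rec["answer"]},
                  ]}
                  f.write(json.dumps(example, ensure_ascii=False) + "\n")

      to_sft_jsonl([{"question": "What is the capital of Ukraine?", "answer": "Kyiv."}],
                   "sft_sample.jsonl")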

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 66 views · 2 applications · 19d

    Senior/Middle Data Scientist (Data Preparation, Pre-training)

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, RAGs.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar data-processing pipeline approaches.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project’s focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form targeted SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.
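
    To illustrate the de-duplication responsibility above, a toy near-duplicate check using Jaccard similarity over word 5-gram shingles; a cheap stand-in for the MinHash/LSH approaches used at real scale:

      def shingles(text: str, n: int = 5) -> set:
          tokens = text.lower().split()
          return {" ".join(tokens[i:i + n]) for i in range(max(len(tokens) - n + 1, 1))}

      def jaccard(a: set, b: set) -> float:
          return len(a & b) / len(a | b) if a | b else 0.0

      def near_duplicates(docs, threshold: float = 0.8):
          sigs = [shingles(d) for d in docs]
          return [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))
                  if jaccard(sigs[i], sigs[j]) >= threshold]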

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 18 views · 1 application · 24d

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · C1 - Advanced

    We are looking for a Data Scientist to support a Data & AI team. The role is focused on developing scalable AI/ML solutions and integrating Generative AI into evolving business operations.


    About the Role

    As a Data Scientist, you will:

    • Collaborate with Product Owners, Data Analysts, Data Engineers, and ML Engineers to design, develop, deploy, and monitor scalable AI/ML products.
    • Lead initiatives to integrate Generative AI into business processes.
    • Work closely with business stakeholders to understand challenges and deliver tailored data-driven solutions.
    • Monitor model performance and implement improvements.
    • Apply best practices in data science and ML for sustainable, high-quality results.
    • Develop and fine-tune models with a strong focus on accuracy and business value.
    • Leverage cutting-edge technologies to drive innovation and efficiency.
    • Stay updated on advancements in AI and data science, applying new techniques to ongoing processes.


    About the Candidate

    We are looking for a professional with strong analytical and technical expertise.


    Must have:

    • 3+ years of hands-on experience in Data Science and ML.
    • Experience with recommendation systems and prescriptive analytics.
    • Proficiency in Python, SQL, and ML libraries/frameworks.
    • Proven experience developing ML models and applying statistical methods.
    • Familiarity with containerization and orchestration tools.
    • Excellent communication skills and strong command of English.
    • Bachelor’s or Master’s degree in Computer Science, Statistics, Physics, or Mathematics.
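
    For illustration, the recommendation-systems requirement above can be reduced to its simplest form: item-based collaborative filtering with cosine similarity over a user-item interaction matrix. The matrix below is a tiny hypothetical example:

      import numpy as np
      from sklearn.metrics.pairwise import cosine_similarity

      interactions = np.array([   # rows = users, columns = items
          [1, 0, 1, 0],
          [0, 1, 1, 0],
          [1, 1, 0, 1],
      ], dtype=float)
      item_sim = cosine_similarity(interactions.T)   # (items x items)
      scores = interactions @ item_sim               # score every item for every user
      scores[interactions > 0] = -np.inf             # drop items already seen
      print(scores.argmax(axis=1))                   # top recommendation per user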


    Nice to have:

    • Experience with Snowflake.
    • Exposure to Generative AI and large language models.
    • Knowledge of AWS services.
    • Familiarity with NLP models (including transformers).
  • 40 views · 2 applications · 16d

    3D Computer Vision Engineer (3D Reconstruction)

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · B2 - Upper Intermediate

    DeepX is on a mission to push the boundaries of 3D perception. We’re looking for an experienced Computer Vision Engineer  (3D Reconstruction) who lives and breathes reconstruction pipelines, thrives on geometry challenges, and wants to architect systems that redefine how machines see the world.

    This is not a plug-and-play role. You’ll be building the core reconstruction engine that powers our vision products - designing a modular, blazing-fast pipeline where algorithms can be swapped in and out like precision-tuned gears. Think COLMAP on steroids, fused with neural rendering, and optimized for scale.

        Core Tech Stack:

    • Languages: C++, Python
    • CV/3D Libraries: OpenCV, Open3D, PCL, COLMAP
    • Math/Utils: NumPy, Eigen
    • Visualization: Plotly, Matplotlib
    • Deep Learning: PyTorch, TensorFlow
    • Data: Point clouds, meshes, multi-view image sets.
       

      Desired Expertise: 

    • Core Expertise: Deep, hands-on knowledge of 3D computer vision fundamentals, including projective geometry, triangulation, transformations, and camera models.
    • Algorithm Mastery: Proven experience with point cloud and mesh processing algorithms, such as ICP for registration and refinement.
    • Development Experience: Strong software engineering skills, primarily in a Linux environment. Experience deploying applications on Windows (or via WSL) is a major plus.
    • Data Handling: Experience managing and analyzing the large datasets typical in 3D reconstruction.
    • Projective Geometry Mastery: Camera models, projections, triangulation, multi-sensor fusion.
    • Transformations: Rotations, quaternions, coordinate system conversions, 3D frame manipulations.
    • SfM & MVS: Proven hands-on with pipelines and dense reconstructions.
    • SLAM: Bundle adjustment, pose graph optimization, loop closure.
    • Code Craft: Strong software engineering chops - designing modular, performant, production-grade systems.
    • Visualization: Proficiency with 3D visualization tools and libraries (e.g., OpenGL, Blender scripting) for rendering and debugging point clouds and meshes.
    • Bonus Points:
      - You’ve built a full 3D reconstruction pipeline from scratch.
      - Hands-on with Gaussian Splatting or NeRFs.
      - Experience with SuperGlue or other state-of-the-art feature matching models.
      - Hybrid reconstruction experience: fusing classical geometry with neural methods.
      - Experience with real-time or streaming reconstruction systems.
      - Familiarity with emerging topics like 3D scene segmentation and the application of LLMs to geometric data.
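
    For a flavour of the geometry fundamentals listed above, a minimal linear (DLT) triangulation of a single 3D point from two views; P1 and P2 are assumed 3x4 projection matrices and x1, x2 pixel observations:

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          # Stack the four linear constraints from the two projections.
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]                  # homogeneous solution: last right singular vector
          return X[:3] / X[3]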

    What You’ll Do:

    • End-to-End Pipeline Development: You will architect, build, and deploy a robust, high-performance 3D reconstruction pipeline from multi-view imagery. This includes owning and optimizing all core modules: feature detection, matching, camera pose estimation, SfM, dense stereo (MVS), and mesh/surface generation.
    • System Architecture: Design a highly modular and scalable system that allows for interchangeable components, facilitating rapid A/B testing between classical geometric algorithms and modern neural approaches.
    • Performance Optimization: Profile and optimize the entire pipeline for low-latency, real-time performance. This involves advanced GPU programming (CUDA/OpenCL), efficient memory management to handle large models, and leveraging modern compute frameworks.
    • Research & Integration: Stay at the forefront of academic and industry research. You will be responsible for identifying, implementing, and integrating state-of-the-art methods in SLAM, neural rendering (NeRFs, 3DGS), and hybrid geometry-neural network models.
    • Data Management: Develop solutions for handling, processing, and distributing large-scale image and 3D datasets (e.g., using tools like Rclone).
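
    A minimal sketch of the "interchangeable components" idea from the system-architecture point above: each stage hides behind a small interface, so a classical matcher can be swapped for a learned one without touching the rest. All names are hypothetical, not existing internals:

      from typing import Protocol

      class FeatureMatcher(Protocol):
          def match(self, image_a, image_b) -> list[tuple[int, int]]: ...

      class DenseReconstructor(Protocol):
          def reconstruct(self, images, poses): ...

      class ReconstructionPipeline:
          def __init__(self, matcher: FeatureMatcher, reconstructor: DenseReconstructor):
              self.matcher = matcher
              self.reconstructor = reconstructor

          def run(self, images, poses):
              matches = [self.matcher.match(a, b) for a, b in zip(images, images[1:])]
              # ...pose refinement and dense stereo would slot in here...
              return self.reconstructor.reconstruct(images, poses), matches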

      Why Join DeepX?

      This is your chance to own a core engine at the frontier of 3D vision. You’ll be surrounded by a small but elite team, working on real-world deployments where your algorithms won’t just run in benchmarks - they’ll run in airports, mines, logistics hubs, and beyond. If you want your code to shape how machines perceive the world at scale, this is the place.

       

      Sounds like you? -> Let’s talk
      Send us your portfolio, GitHub, or projects - we love seeing real reconstructions more than polished CVs.
       

  • 64 views · 0 applications · 10d

    Senior Computer Vision Engineer (3D Perception)

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    DataRoot Labs is a full-cycle AI R&D center and Ukraine’s largest hub of AI talent and compute. Over the past 10+ years, we’ve shipped 45+ AI-powered products for leaders and startups across Automotive, Energy, Navigation, Gaming, Education, and more.
     

    Our client builds autonomous yard trucks that replace human drivers to make logistics yards safer, faster, and more efficient
     

    Your mission: make the AI stack see in 3D and drive correctly, every time.

     

    Responsibilities:

    • Invent & build computer vision algorithms with a focus on 3D object detection and scene reconstruction
    • Own productionization: collaborate with ML/Robotics/Platform teams to ship robust, low-latency vision models
    • Push the frontier: work hands-on with modern 3D vision, Deep Learning, and RL/VLA-aware pipelines

       

    Requirements:

    • Strong background in Computer Vision and Deep Learning
    • Hands-on experience with RGB-based 3D detection and occupancy networks
    • Proficiency with PyTorch or TensorFlow
    • Experience in 3D reconstruction, point clouds, or related domains
    • Python mastery; solid problem-solving and analytical skills
    • Upper-Intermediate English (written & spoken)
    • Experience with inference optimization (e.g., quantization/acceleration) is a plus

     

    Would be a plus:

    • Knowledge or practical experience with Reinforcement Learning
    • Work with edge devices & sensors (cameras, LiDAR, radar, etc.)
    • Background in automotive or robotics
    • C++/CUDA familiarity

     

    What We Offer:

    • Real autonomy impact. Work on vehicles that use RL, Vision-Language-Action (VLA), and many other state-of-the-art architectures
    • Access to DGX H200 systems and large-scale GPU clusters
    • Startup pace, research mindset, and a goal-oriented team
    • Newest technical equipment and modern MLOps
    • 20 working days of paid vacation, English courses, educational events & conferences, and medical insurance
    • Your engineering will directly influence how autonomous vehicles perceive and act
  • 132 views · 29 applications · 28d

    Machine Learning Engineer for an Online News Media

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate

    We are seeking a Machine Learning Engineer to enhance our data science and ML capabilities, supporting current and upcoming projects. The ideal candidate will have strong expertise in developing, deploying, and optimizing predictive models and working with modern ML platforms in a cloud environment. You will collaborate closely with product and engineering teams to deliver data-driven solutions that directly impact business outcomes.

    Benefits:

    • Opportunity to work with cutting-edge ML platforms and cloud technologies
    • Supportive and collaborative international team
    • Flexible working schedule
    • Fully remote work with the ability to work from anywhere

     

    Requirements:

    • Proven experience in building and optimizing machine learning models for predictive analytics, regression, forecasting, and categorization tasks.
    • Strong proficiency in Python, including libraries such as pandas, NumPy, scikit-learn, TensorFlow, or PyTorch, for end-to-end model development and deployment.
    • Hands-on expertise with cloud ML platforms like Google AutoML or AWS SageMaker to accelerate and scale model training.
    • Solid knowledge of data processing with tools such as SQL, BigQuery, or Spark, including data wrangling, feature engineering, and managing large-scale workflows.
    • Practical understanding of model lifecycle management, including version control, monitoring, retraining, and performance optimization.
    • Experience integrating ML solutions into cloud-based data pipelines and APIs, ensuring production-level reliability.
    • Familiarity with MLOps best practices, including CI/CD workflows for ML, containerization with Docker, and orchestration with Kubernetes.
    • Strong background in statistical modeling, experimentation, and A/B testing to validate and continuously improve model performance.

     

    Responsibilities:

    • Develop, evaluate, and optimize ML models for predictive analytics, forecasting, and categorization tasks
    • Build and maintain scalable data pipelines and integrate ML solutions into production environments
    • Apply feature engineering, data wrangling, and statistical modeling techniques to improve model performance
    • Manage model lifecycle (monitoring, retraining, versioning) and ensure scalability
    • Collaborate with product managers, engineers, and analysts to align business requirements with technical solutions
    • Participate in A/B testing and experimentation to validate performance of ML models
    • Contribute to MLOps best practices and improve CI/CD workflows for ML projects

    About the client:

    A multifaceted digital media company dedicated to helping citizens, consumers, business leaders, and policy officials make important decisions in their lives. The company publishes independent reporting, rankings, data journalism, and advice that has earned the trust of readers and users for nearly 90 years. They are an American media company that publishes news, consumer advice, rankings, and analysis. It was launched in 1948 as the merger of a domestic-focused weekly newspaper and an international-focused weekly magazine. In 1995, the company launched its website, and in 2010 the magazine ceased printing. They reach more than 40 million people monthly during moments when they are most in need of expert advice and are motivated to act on that advice directly on their platforms.

    Industry:

    Online Media

    Location:

    United States

    About project:

    An American media company that publishes news, opinion, consumer advice, rankings, and analysis. Founded as a news magazine in 1933, it transitioned to primarily web-based publishing in 2010, although it still publishes its rankings. It covers politics, education, health, money, careers, travel, technology, and cars.

  • 59 views · 18 applications · 5d

    Middle Data Scientist

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B1 - Intermediate

    Project
    Global freight management solutions and services, specializing in Freight Audit & Payment, Order Management, Supplier Management, Visibility, TMS and Freight Spend Analytics.

    Overview
    We are looking for a Data Scientist with a strong background in statistics and probability theory to help us build intelligent analytical solutions. The current focus is on outlier detection in freight management data, with further development toward anomaly detection and forecasting models for logistics and freight spend. The role requires both deep analytical thinking and practical hands-on work with data, from SQL extraction to model deployment.

    Key Responsibilities

    • Apply statistical methods and machine learning techniques for outlier and anomaly detection.
    • Design and develop forecasting models to predict freight costs, shipment volumes, and logistics trends.
    • Extract, preprocess, and transform large datasets directly from SQL databases.
    • Categorize exceptions into business-defined groups (e.g., High Value Exceptions, Accessorial Charge Exceptions, Unexpected Origin/Destination).
    • Collaborate with business analysts to align analytical approaches with domain requirements.
    • Use dashboards (e.g., nSight) for validation, visualization, and reporting of results.
    • Ensure models are interpretable, scalable, and deliver actionable insights.
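
    As a minimal illustration of the outlier-detection focus above, a robust z-score (median/MAD) flag over a freight-cost column; the column name, threshold, and sample values are hypothetical:

      import pandas as pd

      def flag_outliers(df: pd.DataFrame, col: str = "freight_cost", z: float = 3.5) -> pd.DataFrame:
          median = df[col].median()
          mad = (df[col] - median).abs().median()
          robust_z = 0.6745 * (df[col] - median) / (mad if mad else 1.0)
          return df.assign(is_outlier=robust_z.abs() > z)

      df = pd.DataFrame({"freight_cost": [120, 135, 128, 131, 990, 125]})
      print(flag_outliers(df))   # only the 990 row is flagged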

    Requirements

    • Strong foundation in statistics and probability theory.
    • Proficiency in Python with libraries such as pandas, numpy, matplotlib, scikit-learn.
    • Proven experience with outlier/anomaly detection techniques.
    • Hands-on experience in forecasting models (time-series, regression, or advanced ML methods).
    • Strong SQL skills for working with large datasets.
    • Ability to communicate findings effectively to both technical and non-technical stakeholders.

    Nice to Have

    • Experience with ML frameworks (TensorFlow, PyTorch).
    • Familiarity with MLOps practices and model deployment.
    • Exposure to logistics, supply chain, or financial data.
    • Knowledge of cloud platforms (AWS, GCP, Azure).
  • 41 views · 2 applications · 19d

    Reinforcement Learning Engineer

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    DataRoot Labs is a full-cycle AI R&D center and Ukraine’s largest hub of AI talent and compute. Over the past 10+ years, we’ve shipped 45+ AI-powered products for leaders and startups across Automotive, Energy, Navigation, Gaming, Education, and more.
     

    Our client builds autonomous yard trucks that replace human drivers to make logistics yards safer, faster, and more efficient
     

    Your mission: make the AI stack see in 3D and drive correctly, every time.

    Responsibilities:

    • Research and develop advanced reinforcement learning (RL) algorithms and their applications
    • Build, simulate, and test RL environments for real-world deployment scenarios
    • Apply modern deep learning techniques, including transformers and computer vision, to RL problems
    • Optimize inference performance for large-scale RL solutions

     

    Requirements:

    • Strong background in Reinforcement Learning and Deep Learning
    • Hands-on experience with simulation and environments for RL
    • Proven production experience in RL, including deployment cases
    • Deep understanding of RL techniques (e.g., policy gradients, actor-critic, Q-learning)
    • Experience with transformers and computer vision
    • Upper-Intermediate English (written & spoken) for documentation and collaboration
    • Strong knowledge of Python
    • Experience with inference optimization
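
    For context on the value-based techniques named above, a toy tabular Q-learning loop on a 5-state corridor; the environment and hyperparameters are made up for illustration:

      import numpy as np

      n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right; goal = state 4
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps = 0.1, 0.99, 0.1
      rng = np.random.default_rng(0)

      for _ in range(2000):
          s = 0
          while s != n_states - 1:
              a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
              s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
              r = 1.0 if s_next == n_states - 1 else 0.0
              Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
              s = s_next

      print(Q.argmax(axis=1))   # the learned policy moves right toward the goal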

     

    Would be a plus:

    • Experience with Vision-Language Architectures (VLAs)
    • Work with edge devices and sensors (e.g., camera, lidar, radar, etc.)
    • Experience in automotive or robotics
    • Knowledge of C++/CUDA

     

    What We Offer:

    • Real autonomy impact. Work on vehicles that use RL, Vision-Language-Action (VLA), and many other state-of-the-art architectures
    • Access to DGX H200 systems and large-scale GPU clusters
    • Startup pace, research mindset, and a goal-oriented team
    • Newest technical equipment and modern MLOps
    • 20 working days of paid vacation, English courses, educational events & conferences, and medical insurance
    • Your engineering will directly influence how autonomous vehicles perceive and act
  • 52 views · 1 application · 5d

    Data Science Team Lead

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate

    Automat-it is where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI solutions that drive real results. We work across EMEA and the US, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.

    We’re looking for a hands-on Data Science Team Lead to build and scale production AI/ML solutions on AWS while leading a Data Scientist team. You’ll own the full lifecycle, from discovery and proof-of-concepts to training, optimization, deployment, and iteration in production, partnering closely with customers and cross-functional teams.

     

    If you are interested in this opportunity, please submit your CV in English.

     

    Responsibilities

    • People leadership: Manage and coach a team of Data Scientists, set clear goals, run 1:1s, and support career growth and technical excellence.
    • Delivery ownership: Drive project scoping, planning, and on-time, high-quality delivery from inception to production. Proactively remove blockers, manage risks, and communicate status to stakeholders.
    • Customer engagement: Work directly with founders/technical leaders to understand goals, translate them into feasible AI roadmaps, and ensure measurable business outcomes.
    • Model Development & Deployment: Deploy and train models on AWS SageMaker (using TensorFlow/PyTorch).
    • Model Tuning & Optimization: Fine-tune and optimize models using techniques like quantization and distillation, and tools like Pruna.ai and Replicate.
    • Generative AI Solutions: Design and implement advanced GenAI solutions, including prompt engineering and retrieval-augmented generation (RAG) strategies.
    • LLM Workflows: Develop agentic LLM workflows that incorporate tool usage, memory, and reasoning for complex problem-solving.
    • Scalability & Performance: Maximize model performance on AWS by leveraging techniques such as model compilation, distillation, and quantization, and by using AWS-specific features.
    • Collaboration: Work closely with other teams (Data Engineering, DevOps, MLOps, Solution Architects, Sales teams) to integrate models into production pipelines and workflows.
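
    As a rough sketch of the retrieval step behind the RAG strategies mentioned above; TF-IDF stands in here for a real embedding model, and the documents and question are hypothetical:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      docs = [
          "SageMaker endpoints serve real-time model predictions.",
          "Quantization reduces model size and inference latency.",
          "RAG augments prompts with retrieved context documents.",
      ]
      question = "How can I make inference cheaper?"

      vectorizer = TfidfVectorizer().fit(docs + [question])
      doc_vecs, q_vec = vectorizer.transform(docs), vectorizer.transform([question])
      best = int(cosine_similarity(q_vec, doc_vecs).argmax())

      prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}"
      print(prompt)   # this prompt would then be sent to the LLM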

       

    Requirements

    • Team management experience: 2+ years leading engineers or scientists (people or tech lead).
    • Technical experience: 5–6+ years in Data Science/ML (including deep learning/LLMs).
    • Excellent customer-facing skills to understand and address client needs effectively.
    • Expert in Python and deep learning frameworks (PyTorch/TensorFlow), and hands-on with AWS ML services (especially SageMaker and Bedrock).
    • Proven experience with generative AI and fine-tuning large language models.
    • Experience deploying ML solutions on AWS cloud infrastructure and familiarity with MLOps best practices.
    • Fluent written and verbal communication skills in English.
    • A master’s degree in a relevant field and AWS ML certifications are a plus.

       

    Benefits

    • Professional training and certifications covered by the company (AWS, FinOps, Kubernetes, etc.)
    • International work environment
    • Referral program – enjoy cooperation with your colleagues and get a bonus 
    • Company events and social gatherings (happy hours, team events, knowledge sharing, etc.)
    • English classes
    • Soft skills training

       

    Country-specific benefits will be discussed during the hiring process.

    Automat-it is committed to fostering a workplace that promotes equal opportunities for all and believes that a diverse workforce is crucial to our success. Our recruitment decisions are based on your experience and skills, recognizing the value you bring to our team.

  • 265 views · 62 applications · 5d

    Data Scientist

    Full Remote · Worldwide · 1 year of experience · B1 - Intermediate

    Junior Data Scientist / Data Engineer | Blockchain | On-Chain Analytics
    Location: Remote
    Type: Full-time
     

    Everstake is the largest blockchain network validator in Ukraine and one of the top players in global Web3 infrastructure. We support dozens of networks worldwide and are scaling fast.

    We are looking for an exceptional Data Scientist / Data Engineer to join our team. The ideal candidate has a strong passion for blockchain technology and a proven track record of leveraging data to drive insights and business decisions.


    About the Role

    You’ll be responsible for conducting deep analysis of on-chain data, developing hypotheses to improve profitability, and creating tools to automate data collection and analysis. Your work will have a direct impact on strategy and decision-making at Everstake.


    What You’ll Do

    • Conduct deep analysis of on-chain data to identify trends, patterns, and insights (a short pandas sketch follows this list).
    • Propose and test hypotheses that can increase company revenue and efficiency.
    • Collaborate with cross-functional teams to translate data into actionable recommendations.
    • Build and maintain tools and scripts for data collection, processing, and analysis.
    • Create reports, dashboards, and visualizations (e.g., PowerBI, Tableau) using blockchain data.
    • Communicate findings clearly to both technical and non-technical stakeholders.
    • Stay up-to-date with blockchain innovations and apply them to on-chain analytics.
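
    As a concrete, hypothetical illustration of the analysis and tooling items above, the short pandas sketch below aggregates daily staking rewards per validator. The dataset is invented for the example; real inputs would come from node RPC endpoints or an on-chain indexer.

    # Aggregate daily staking rewards per validator (synthetic data for illustration).
    import pandas as pd

    rewards = pd.DataFrame(
        {
            "date": pd.to_datetime(["2024-05-01", "2024-05-01", "2024-05-02", "2024-05-02"]),
            "validator": ["val-a", "val-b", "val-a", "val-b"],
            "reward": [1.20, 0.95, 1.35, 0.90],  # in native tokens
        }
    )

    # Daily totals per validator, then a simple day-over-day relative change.
    daily = rewards.pivot_table(index="date", columns="validator", values="reward", aggfunc="sum")
    print(daily)
    print(daily.pct_change())  # useful for spotting sudden drops or anomalies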
       

    What We're Looking For

    • Bachelor's degree (or current student) in Computer Science, Data Science, or a related field.
    • Proficiency with SQL or SQL-like query languages.
    • Experience working with large datasets and data analysis libraries (e.g., Pandas, NumPy, SciPy).
    • Strong background in blockchain technology and on-chain data analysis.
    • Hands-on experience with data visualization tools (PowerBI, Tableau, or similar).
    • Excellent analytical, problem-solving, and critical-thinking skills.
    • Strong written and verbal communication skills.
       

    Nice to Have

    • Experience with big data frameworks (e.g., Spark, Hadoop).
    • Familiarity with Web3 analytics platforms and APIs.
    • Contributions to blockchain / crypto open-source projects.
    • Knowledge of statistical modeling or machine learning methods.
  • · 43 views · 3 applications · 30d

    Senior Data Scientist to $5500

    Full Remote · Countries of Europe or Ukraine · Product · 8 years of experience · C1 - Advanced
    We are seeking an experienced and highly skilled Senior Data Scientist to drive data-driven decision-making and innovation. In this role, you will leverage your expertise in advanced analytics, machine learning, and big data technologies to solve complex...

    We are seeking an experienced and highly skilled Senior Data Scientist to drive data-driven decision-making and innovation. In this role, you will leverage your expertise in advanced analytics, machine learning, and big data technologies to solve complex business challenges. You will be responsible for designing predictive models, building scalable data pipelines, and uncovering actionable insights from structured and unstructured datasets. Working closely with cross-functional teams, you will empower strategic decision-making and foster a data-driven culture across the organization.

    Desired candidates possess:

    Technical Skills:

    • Proficiency in Python, R, or other data science programming languages.
    • Strong knowledge of machine learning libraries and frameworks (e.g., Scikit-learn, TensorFlow, PyTorch).
    • Advanced SQL skills for querying and managing relational databases.
    • Experience with big data technologies (e.g., Spark, Hadoop) and cloud platforms (AWS, Azure, GCP), preferably MS Azure.
    • Familiarity with data visualization tools such as Power BI, Tableau, or equivalent, preferably MS Power BI.

    Analytical and Problem-solving Skills:

    • Expertise in statistical modeling, hypothesis testing, and experiment design (a brief example follows this list).
    • Strong problem-solving skills to address business challenges through data-driven solutions.
    • Ability to conceptualize and implement metrics/KPIs tailored to business needs.
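
    To make the hypothesis-testing point concrete, below is a small sketch of a two-sample comparison with SciPy. The data is synthetic, and the 0.05 significance level is only a common convention, not a prescribed standard.

    # Synthetic A/B comparison: did the variant move the target metric?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    control = rng.normal(loc=0.100, scale=0.03, size=5000)  # e.g. per-user revenue
    variant = rng.normal(loc=0.104, scale=0.03, size=5000)

    t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)  # Welch's t-test
    print(f"lift = {variant.mean() - control.mean():.4f}, p = {p_value:.4f}")

    # Reject the null hypothesis of "no difference" only if p falls below the
    # significance level chosen before the experiment (commonly 0.05).
    print("significant" if p_value < 0.05 else "not significant")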

    Soft Skills:

    • Excellent English communication skills to translate complex technical concepts into business insights.
    • Collaborative mindset with the ability to work in cross-functional teams.
    • Proactive and detail-oriented approach to project management and execution.

    Education and Experience:

    • Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
    • 8+ years of experience in data science, advanced analytics, or a similar field.
    • Proven track record of deploying machine learning models in production environments.

    Key Responsibilities:

    1. Data Exploration and Analysis:
    • Collect, clean, and preprocess large and complex datasets from diverse sources, including SQL databases, cloud platforms, and APIs.
    • Perform exploratory data analysis (EDA) to identify trends, patterns, and relationships in data.
    • Develop meaningful KPIs and metrics tailored to business objectives.

    2. Advanced Modeling and Machine Learning:

    • Design, implement, and optimize predictive and prescriptive models using statistical techniques and machine learning algorithms (see the sketch after this list).
    • Evaluate model performance and ensure scalability and reliability in production.
    • Work with both structured and unstructured data for tasks such as text analysis, image processing, and recommendation systems.
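
    As a minimal illustration of the modeling workflow described in item 2, the sketch below fits and evaluates a simple classifier with scikit-learn on synthetic data. The feature set, model choice, and metric are placeholders rather than a prescribed approach.

    # Train and evaluate a simple predictive model on synthetic data (illustrative only).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )

    model = GradientBoostingClassifier(random_state=0)
    model.fit(X_train, y_train)

    # Probabilistic predictions allow a threshold-independent quality metric.
    scores = model.predict_proba(X_test)[:, 1]
    print(f"test ROC AUC: {roc_auc_score(y_test, scores):.3f}")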

    3. Data Engineering and Automation:

    • Build and optimize scalable ETL pipelines for data processing and feature engineering.
    • Collaborate with data engineers to ensure seamless integration of data science solutions into production environments.
    • Leverage cloud platforms (e.g., AWS, Azure, GCP) for scalable computation and storage.

    4. Data Visualization and Storytelling:

    • Communicate complex analytical findings effectively through intuitive visualizations and presentations.
    • Create dashboards and visualizations using tools such as Power BI, Tableau, or Python libraries (e.g., Matplotlib, Seaborn, Plotly).
    • Translate data insights into actionable recommendations for stakeholders.

    5. Cross-functional Collaboration and Innovation:

    • Partner with business units, product teams, and data engineers to define project objectives and deliver impactful solutions.
    • Stay updated with emerging technologies and best practices in data science, machine learning, and AI.
    • Contribute to fostering a data-centric culture within the organization by mentoring junior team members and promoting innovative approaches.

    What we offer:

    • Employment according to the Labor Code of Ukraine (with full tax compensation) or as a private entrepreneur (PE)
    • Monthly income: salary of $4500-$5500
    • Bonuses + referral program
    • Paid annual vacation (the "Annual Vacation") and paid sick leave
    • Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions

    Grab your chance to join us, and send us your CV in English!

  • · 39 views · 7 applications · 30d

    Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced
    We are seeking a highly skilled Machine Learning Engineer with expertise in Deep Learning, Natural Language Processing (NLP), and Large Language Models (LLMs). You will be responsible for designing, building, and deploying advanced ML models and...

    We are seeking a highly skilled Machine Learning Engineer with expertise in Deep Learning, Natural Language Processing (NLP), and Large Language Models (LLMs). You will be responsible for designing, building, and deploying advanced ML models and pipelines, ensuring scalability, performance, and production readiness. The ideal candidate has strong research knowledge combined with hands-on engineering skills to deliver intelligent, enterprise-grade AI solutions.

     

    Details:
    Location: Remote in EU
    Employment Type: Full-Time, B2B Contract
    Start Date: ASAP
    Language Requirements: Fluent English

     

    Key Responsibilities

    • Design, develop, and optimize ML models with a focus on deep learning, NLP, and LLM-based applications.
    • Build scalable pipelines for training, fine-tuning, evaluation, and deployment of models.
    • Work with frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers.
    • Fine-tune and adapt pre-trained LLMs (GPT, BERT, LLaMA, etc.) for domain-specific tasks (a minimal sketch follows this list).
    • Develop solutions for text classification, summarization, embeddings, RAG, and conversational AI.
    • Ensure model scalability, robustness, and low-latency performance in production environments.
    • Collaborate with data engineers to prepare and optimize large-scale datasets.
    • Implement MLOps practices (CI/CD, monitoring, retraining, governance).
    • Participate in code reviews, documentation, and technical knowledge sharing.
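
    For the fine-tuning and text-classification items above, a minimal Hugging Face sketch is shown below. The model name and the two-label setup are assumptions for illustration; a real project would add a training loop (for example the Trainer API) over labelled data before deployment.

    # Adapt a pre-trained transformer for two-class text classification (sketch only).
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_name = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    batch = tokenizer(
        ["The deployment failed again.", "Latency is back to normal."],
        padding=True,
        truncation=True,
        return_tensors="pt",
    )

    with torch.no_grad():
        logits = model(**batch).logits  # shape: (batch_size, num_labels)

    # The classification head is freshly initialized, so outputs are near-uniform
    # until the model is fine-tuned on task-specific labels.
    print(torch.softmax(logits, dim=-1))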

     

    Requirements

    • 5+ years of experience in machine learning, with at least 3 years focused on deep learning/NLP.
    • Strong expertise in PyTorch or TensorFlow, and NLP frameworks (Hugging Face, spaCy, NLTK).
    • Hands-on experience with LLMs (GPT, T5, LLaMA, Falcon, etc.), including fine-tuning and prompt engineering.
    • Proficiency in Python and libraries (NumPy, Pandas, Scikit-learn).
    • Experience with MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML).
    • Strong understanding of transformer architectures, embeddings, and attention mechanisms.
    • Familiarity with cloud platforms (AWS, Azure, GCP) for ML deployment.
    • Excellent problem-solving and debugging skills.

     

    Nice to Have

    • Experience with vector databases (Pinecone, Weaviate, Milvus) for semantic search.
    • Knowledge of retrieval-augmented generation (RAG) pipelines.
    • Exposure to multimodal ML (text + image/audio/video).
    • Contributions to open-source ML/NLP projects.
    • Advanced degree (MSc/PhD) in Computer Science, AI, or related field.
    • Industry background in fintech, healthcare, telecom, or e-commerce.