Jobs (100)
  • 22 views · 0 applications · 16d

    Senior/Middle Data Scientist

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, and RAG.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas.

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar data-processing pipeline approaches.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project’s focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form task-specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.
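    To make the cleaning and de-duplication step above concrete, here is a minimal Python sketch (an illustration only, not the client's actual pipeline; production systems typically add MinHash/LSH for near-duplicate detection, language identification, and toxicity filters on top of exact hashing):

```python
import hashlib
import re
import unicodedata

def normalize(text: str) -> str:
    """Unicode-normalize, lowercase, and collapse whitespace."""
    text = unicodedata.normalize("NFKC", text)
    return re.sub(r"\s+", " ", text.strip().lower())

def deduplicate(docs):
    """Drop exact duplicates by hashing the normalized text."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

corpus = ["Hello  World", "hello world", "Another document"]
print(deduplicate(corpus))  # ['Hello  World', 'Another document']
```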

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 35 views · 1 application · 16d

    Senior/Middle Data Scientist (Data Preparation, Pre-training)

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master’s or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, and RAG.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas.

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar data-processing pipeline approaches.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project’s focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form task-specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 32 views · 2 applications · 16d

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · C1 - Advanced

    We are looking for a Data Scientist to support a Data & AI team. The role is focused on developing scalable AI/ML solutions and integrating Generative AI into evolving business operations.


    About the Role

    As a Data Scientist, you will:

    • Collaborate with Product Owners, Data Analysts, Data Engineers, and ML Engineers to design, develop, deploy, and monitor scalable AI/ML products.
    • Lead initiatives to integrate Generative AI into business processes.
    • Work closely with business stakeholders to understand challenges and deliver tailored data-driven solutions.
    • Monitor model performance and implement improvements.
    • Apply best practices in data science and ML for sustainable, high-quality results.
    • Develop and fine-tune models with a strong focus on accuracy and business value.
    • Leverage cutting-edge technologies to drive innovation and efficiency.
    • Stay updated on advancements in AI and data science, applying new techniques to ongoing processes.


    About the Candidate

    We are looking for a professional with strong analytical and technical expertise.


    Must have:

    • 3+ years of hands-on experience in Data Science and ML.
    • Experience with recommendation systems and prescriptive analytics.
    • Proficiency in Python, SQL, and ML libraries/frameworks.
    • Proven experience developing ML models and applying statistical methods.
    • Familiarity with containerization and orchestration tools.
    • Excellent communication skills and strong command of English.
    • Bachelor’s or Master’s degree in Computer Science, Statistics, Physics, or Mathematics.


    Nice to have:

    • Experience with Snowflake.
    • Exposure to Generative AI and large language models.
    • Knowledge of AWS services.
    • Familiarity with NLP models (including transformers).
  • 163 views · 29 applications · 16d

    Strong Junior Data Scientist

    Full Remote · Countries of Europe or Ukraine · Product · 1 year of experience · B1 - Intermediate

    In Competera, we are building a place where optimal pricing decisions can be made easily. We believe that AI technologies will soon drive all challenging decisions and are capable of helping humans be better.
    We are now seeking a Junior Data Scientist to play a key role in reshaping the way we deliver our solutions.

    What you will do

    • Conduct Exploratory Data Analysis (EDA) to uncover hidden patterns and formulate hypotheses that shape the modeling strategy.
    • Design and analyze A/B tests to measure the impact of your models and ideas.
    • Train and evaluate predictive models (including feature engineering and hyperparameter tuning) for challenges like demand forecasting and price elasticity estimation.
    • Map business requirements into well-defined machine learning problems with guidance from senior colleagues.
    • Communicate complex model outputs as clear, actionable insights for business stakeholders.
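    The A/B test analysis mentioned above often comes down to a two-proportion z-test on conversion rates; a self-contained sketch (illustrative numbers, not real experiment data):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference in conversion rates between two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 20% vs. 26% conversion on 1000 users per arm
z = two_proportion_ztest(200, 1000, 260, 1000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```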
       

    You have:

    • SQL basics.
    • A strong math background (Computer Science-related education is preferred).
    • Scientific python toolkit (NumPy, pandas, scikit-learn, Keras / TensorFlow or PyTorch).
    • Deep understanding of ML basics: overfitting, metrics, cross-validation, hyperparameter tuning, classification of ML tasks and models (classification, regression, clustering etc.).
    • Good communication English skills (Intermediate+).
    • 1+ years of hands-on experience in data science.
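    Of the ML basics listed above, cross-validation is easy to make concrete; a hand-rolled k-fold splitter is shown below for illustration (in practice one would use scikit-learn's KFold):

```python
def kfold_indices(n: int, k: int):
    """Yield (train, validation) index lists for k-fold cross-validation.

    Every sample appears in exactly one validation fold, so each model
    is scored on data it never saw during training.
    """
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, val
        start += size

for train, val in kfold_indices(10, 5):
    print(val)  # [0, 1], [2, 3], [4, 5], [6, 7], [8, 9]
```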

    Pleasant extras:

    • Proven graduation from ML/AI MOOCs (Coursera, etc.).
    • Participation in ML competitions (i.e. Kaggle).

    Soft skills:

    • Analytical mindset and critical thinking to solve complex problems.
    • Agile approach, with the ability to experiment and test hypotheses in a dynamic business environment.
    • Business-oriented thinking, capable of translating complex models into clear business insights.
    • Curiosity and a drive for continuous learning in the data domain.
    • Strong team player, able to collaborate across cross-functional teams.

    You’re gonna love it, and here’s why:

    • Rich innovative software stack, freedom to choose the best suitable technologies.
    • Remote-first ideology: freedom to operate from the home office or any suitable coworking.
    • Flexible working hours (we start from 8 to 11 am) and no time-tracking systems.
    • Regular performance and compensation reviews.
    • Recurrent 1-1s and measurable OKRs.
    • In-depth onboarding with a clear success track.
    • Competera covers 70% of your training/course fee.
    • 20 vacation days, 15 days off, and up to one week of paid Christmas holidays.
    • 20 business days of sick leave.
    • Partial medical insurance coverage.
    • We reimburse the cost of coworking.

    Drive innovations with us. Be a Competerian.

  • 37 views · 13 applications · 15d

    Senior Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced

    Join us at Provectus to be a part of a team that is dedicated to building cutting-edge technology solutions that have a positive impact on society. Our company specializes in AI and ML technologies, cloud services, and data engineering, and we take pride in our ability to innovate and push the boundaries of what's possible.

    As an ML Engineer, you’ll be provided with all opportunities for development and growth.

    Let's work together to build a better future for everyone!

    Requirements:

    • Comfortable with standard ML algorithms and underlying math.
    • Strong hands-on experience with LLMs in production, RAG architectures, and agentic systems.
    • AWS Bedrock experience is strongly preferred.
    • Practical experience with solving classification and regression tasks in general, feature engineering.
    • Practical experience with ML models in production.
    • Practical experience with one or more use cases from the following: NLP, LLMs, and Recommendation engines.
    • Solid software engineering skills (i.e., ability to produce well-structured modules, not only notebook scripts).
    • Python expertise, Docker.
    • English level - strong Intermediate.
    • Excellent communication and problem-solving skills.

    Will be a plus:

    • Practical experience with cloud platforms (AWS stack is preferred, e.g. Amazon SageMaker, ECR, EMR, S3, AWS Lambda).
    • Practical experience with deep learning models.
    • Experience with taxonomies or ontologies.
    • Practical experience with machine learning pipelines to orchestrate complicated workflows.
    • Practical experience with Spark/Dask, Great Expectations.

    Responsibilities:

    • Create ML models from scratch or improve existing models. 
    • Collaborate with the engineering team, data scientists, and product managers on production models.
    • Develop experimentation roadmap. 
    • Set up a reproducible experimentation environment and maintain experimentation pipelines.
    • Monitor and maintain ML models in production to ensure optimal performance.
    • Write clear and comprehensive documentation for ML models, processes, and pipelines.
    • Stay updated with the latest developments in ML and AI and propose innovative solutions.
  • 129 views · 5 applications · 13d

    Information Research Specialist (No Experience Needed, Training Provided) to $700

    Hybrid Remote · Ukraine (Kyiv, Lviv) · B2 - Upper Intermediate

    Intetics Inc., a global technology company providing custom software application development, distributed professional teams, software product quality assessment, and «all-things-digital» solutions, is looking for an Incident Editorial Specialist to join our dynamic team.

    Join our night shift editorial team in Lviv and Kyiv and work on real-time traffic incident editing for a global mapping platform.

    No prior experience required; we provide full training. It’s a great way to start your career in IT and gain experience on international projects.

    This is a full-time, remote role. You’ll work with English-language data, applying clear rules and maintaining focus during night shifts.

     

    Requirements:

    • B2+ level English proficiency
    • High attention to detail
    • Confident working with multiple tabs/windows, fast typing
    • Basic familiarity with Google Maps, traffic/navigation apps

       

    Benefits:

    • 8.7-hour fixed night shift, including:
      • 8 paid working hours
      • 40-minute unpaid break
    • Shifts scheduled between 19:00 and 07:00 (exact shift hours depend on assignment)
    • 5 shifts per week, including possible weekend rotations
    • Paid vacation and official holidays in line with company policy
  • 43 views · 3 applications · 12d

    Data Science Engineer / AI Agent Systems Engineer

    Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate

    We’re looking for an experienced engineer to join our team and work on building production-ready AI systems. This role is perfect for someone who enjoys combining AI/ML expertise with solid software engineering practices to deliver real-world solutions.

     

    Requirements:

    - AI/ML: 2+ years hands-on with LLM APIs, production deployment of at least one AI system

    - Experience with LangChain, CrewAI, or AutoGen (one is enough)

    - Understanding of prompt engineering (Chain-of-Thought, ReAct) and tool/function calling

    - Python: 3+ years experience, strong fundamentals, Flask/FastAPI, async/await, REST APIs

    - Production Experience: built systems running in production, handled logging, testing, error handling

    - Cloud experience with AWS / GCP / Azure (one is enough)

    - Familiar with Git, CI/CD, databases (PostgreSQL/MySQL)

     

    Nice to Have:

    Experience with vector databases (Pinecone, Weaviate)

    Docker/containerization knowledge

    Fintech or financial services background

    Advanced ML/AI education or certifications

    What You’ll Work On:

    - Designing and deploying AI-powered systems using LLMs (OpenAI, Anthropic, etc.)

    - Building agent-based solutions with frameworks like LangChain, CrewAI, or AutoGen

    - Integrating AI systems with external APIs, databases, and production services

    - Writing clean, tested Python code and deploying services to the cloud

    - Collaborating with stakeholders to translate business requirements into technical solutions
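    The tool/function-calling pattern mentioned above can be reduced to a toy dispatch loop; everything here (the tool name, the canned model reply) is hypothetical, and a real system would call the provider's function-calling API instead of the stub:

```python
import json

# Hypothetical tool registry: the model emits a JSON "tool call",
# the agent loop dispatches it and would feed the result back.
TOOLS = {
    "get_invoice_total": lambda invoice_id: {"invoice_id": invoice_id, "total": 1250.0},
}

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM API; returns a canned tool call."""
    return json.dumps({"tool": "get_invoice_total", "args": {"invoice_id": "INV-7"}})

def agent_step(prompt: str):
    """One iteration of the loop: ask the model, run the chosen tool."""
    call = json.loads(fake_llm(prompt))
    return TOOLS[call["tool"]](**call["args"])

print(agent_step("What is the total of invoice INV-7?"))
```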

    Project

    A system for automating accounting operations for companies, which reads, analyzes, compares, and interacts with accounting data. The goal is to make processes faster, more accurate, and scalable, minimize manual work, and increase client efficiency.

     

    Project stage: MVP is nearly complete; the next step is to automate the MVP and scale the product.

  • 48 views · 3 applications · 12d

    Data Science Engineer / AI Agent Systems Engineer

    Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate

    We’re looking for an experienced engineer to join our team and work on building production-ready AI systems. This role is perfect for someone who enjoys combining AI/ML expertise with solid software engineering practices to deliver real-world solutions.

     

    Requirements:

    - AI/ML: 2+ years hands-on with LLM APIs, production deployment of at least one AI system

    - Experience with LangChain, CrewAI, or AutoGen (one is enough)

    - Understanding of prompt engineering (Chain-of-Thought, ReAct) and tool/function calling

    - Python: 3+ years experience, strong fundamentals, Flask/FastAPI, async/await, REST APIs

    - Production Experience: built systems running in production, handled logging, testing, error handling

    - Cloud experience with AWS / GCP / Azure (one is enough)

    - Familiar with Git, CI/CD, databases (PostgreSQL/MySQL)

     

    Nice to Have:

    Experience with vector databases (Pinecone, Weaviate)

    Docker/containerization knowledge

    Fintech or financial services background

    Advanced ML/AI education or certifications

    What You’ll Work On:

    - Designing and deploying AI-powered systems using LLMs (OpenAI, Anthropic, etc.)

    - Building agent-based solutions with frameworks like LangChain, CrewAI, or AutoGen

    - Integrating AI systems with external APIs, databases, and production services

    - Writing clean, tested Python code and deploying services to the cloud

    - Collaborating with stakeholders to translate business requirements into technical solutions

     

    Project

    A system for automating accounting operations for companies, which reads, analyzes, compares, and interacts with accounting data. The goal is to make processes faster, more accurate, and scalable, minimize manual work, and increase client efficiency.

     

    Project stage: MVP is nearly complete; the next step is to automate the MVP and scale the product.

  • 42 views · 2 applications · 9d

    Senior Data Scientist to $9000

    Full Remote · Ukraine, Poland, Portugal, Bulgaria · Product · 5 years of experience · B2 - Upper Intermediate

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product:

    Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.

     

    About the Role:

    As a Data Scientist, you’ll play a critical role in shaping and enhancing our AI-driven pricing platform.

     

    Key Responsibilities:

    • Develop and Optimize Advanced ML Models: Build, improve, and deploy machine learning and statistical models for forecasting demand, analyzing price elasticities, and recommending optimal pricing strategies.
    • Lead End-to-End Data Science Projects: Own your projects fully, from conceptualization and experimentation through production deployment, monitoring, and iterative improvement.
    • Innovate with Generative and Predictive AI Solutions: Leverage state-of-the-art generative and predictive modeling techniques to automate complex pricing scenarios and adapt to rapidly changing market dynamics.
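    As a flavor of the elasticity analysis mentioned above, a constant price elasticity can be estimated as the slope of a log-log regression of demand on price; a minimal sketch on synthetic data (not the product's actual models):

```python
import math

def price_elasticity(prices, quantities):
    """OLS slope of log(quantity) on log(price).

    Under the constant-elasticity model q = c * p^b, the slope b is
    the price elasticity of demand (negative for normal goods).
    """
    lp = [math.log(p) for p in prices]
    lq = [math.log(q) for q in quantities]
    mp, mq = sum(lp) / len(lp), sum(lq) / len(lq)
    cov = sum((x - mp) * (y - mq) for x, y in zip(lp, lq))
    var = sum((x - mp) ** 2 for x in lp)
    return cov / var

# Synthetic demand curve q = 100 * p^-1.5 recovers b = -1.5
prices = [1.0, 2.0, 3.0, 4.0]
qty = [100 * p ** -1.5 for p in prices]
print(round(price_elasticity(prices, qty), 2))  # -1.5
```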

    Required Competence and Skills:

    • A Master’s or PhD in Computer Science, Physics, Applied Mathematics or a related field, demonstrating a strong foundation in analytical thinking.
    • At least 5 years of professional experience in end-to-end machine learning lifecycle (design, development, deployment, and monitoring).
    • At least 5 years of professional experience with Python development, including OOP, writing production-grade code, testing, and optimization.
    • At least 5 years of experience with data mining, statistical analysis, and effective data visualization techniques.
    • Deep familiarity with modern ML/DL methods and frameworks (e.g., PyTorch, XGBoost, scikit-learn, statsmodels).
    • Strong analytical skills combined with practical experience interpreting model outputs to drive business decisions.

    Nice-to-Have:

    • Practical knowledge of SQL and experience with large-scale data systems like Hadoop or Spark.
    • Familiarity with MLOps tools and practices (CI/CD, model monitoring, data version control).
    • Experience in reinforcement learning and Monte-Carlo methods.
    • A solid grasp of microeconomic principles, including supply and demand dynamics, price elasticity, as well as econometrics.
    • Experience with cloud services and platforms, preferably AWS.
  • 45 views · 0 applications · 9d

    Data Scientist

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    On behalf of our customer, we are seeking a Data Scientist for customer-facing projects that combine data science with deep knowledge of machine learning and big data technologies, creating solutions for customers’ challenges and needs and defining and developing appropriate technical and business solutions.

     

    Our customer is the leading provider of AI-based Big Data analytics.

    They are dedicated to helping financial organizations combat financial crimes such as money laundering, which facilitates malicious activities such as terrorist financing, narco-trafficking, and human trafficking that negatively impact the global economy.

     

    Key Responsibilities

    • Deliver successful deployments and pilots
    • Manage and design the customer-specific technical solution throughout the project life cycle
    • Provide technical leadership on Data Science & Engineering aspects for team members, including partners
    • Work with various data sources and apply sophisticated feature engineering
    • Bring and apply business domain knowledge
    • Extract insights and actionable recommendations from large volumes of data and investigate anomalies in Big Data
    • Build and manage technical relationships with customers and partners
    • Provide product requirements input to the Product Management team
    • Train customers on the system, covering usage and monitoring
    • Travel to customer locations both domestically and abroad

       

    Position Requirements

    • 3+ years of experience as a Data Engineer, Data Scientist, or Big Data Developer
    • Hands-on experience with Apache Spark, Python/PySpark, and SQL
    • Familiarity with Hadoop ecosystem (Hive, Impala, HDFS, Sqoop) and data pipeline optimization
    • Practical experience building or integrating AI agents and working with LLMs
    • Strong skills in data transformation, ML feature engineering, and analytics for financial services
    • Experience with workflow automation tools (Airflow, MLflow, n8n) and version control (Git)
    • Ability to work with customers, train teams, and deliver technical solutions
    • English level B2 and higher

       

    What we can offer you

    • Remote work from Ukraine or EU countries with a flexible schedule
    • Accounting support & consultation
    • Opportunities for learning and development on the project
    • 20 working days of annual vacation
    • 5 paid sick days/days off; state holidays
    • Working equipment provided

     


    Data Scientist

    Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

    Project
    Global freight management solutions and services, specializing in Freight Audit & Payment, Order Management, Supplier Management, Visibility, TMS and Freight Spend Analytics.

     

    Overview
    We are looking for a Data Scientist with a strong background in statistics and probability theory to help us build intelligent analytical solutions. The current focus is outlier detection in freight management data, with further development toward anomaly detection and forecasting models for logistics and freight spend. The role requires both deep analytical thinking and practical hands-on work with data, from SQL extraction to model deployment.
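    The outlier-detection focus described above can be sketched with scikit-learn's Isolation Forest. This is a minimal illustration only: the feature set, the injected "overcharged" rows, and the contamination rate are assumptions for the example, not the project's actual data.

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical freight features: [shipment weight (kg), invoiced cost (USD)].
    # In practice these rows would come from SQL extraction, as described above.
    rng = np.random.default_rng(42)
    normal = rng.normal(loc=[500.0, 1200.0], scale=[50.0, 100.0], size=(200, 2))
    overcharged = np.array([[480.0, 5000.0], [510.0, 6500.0]])  # injected exceptions
    X = np.vstack([normal, overcharged])

    # contamination is the assumed fraction of outliers in the data.
    model = IsolationForest(contamination=0.02, random_state=0)
    labels = model.fit_predict(X)  # -1 = outlier, 1 = inlier
    flagged = np.flatnonzero(labels == -1)
    print(flagged)
    ```

    Flagged rows would then be categorized into the business-defined exception groups (e.g., High Value Exceptions) and surfaced via dashboards for validation.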

     

    Key Responsibilities

    • Apply statistical methods and machine learning techniques for outlier and anomaly detection.
    • Design and develop forecasting models to predict freight costs, shipment volumes, and logistics trends.
    • Extract, preprocess, and transform large datasets directly from SQL databases.
    • Categorize exceptions into business-defined groups (e.g., High Value Exceptions, Accessorial Charge Exceptions, Unexpected Origin/Destination).
    • Collaborate with business analysts to align analytical approaches with domain requirements.
    • Use dashboards (e.g., nSight) for validation, visualization, and reporting of results.
    • Ensure models are interpretable, scalable, and deliver actionable insights.

    Requirements

    • Strong foundation in statistics and probability theory.
    • Proficiency in Python with libraries such as pandas, numpy, matplotlib, scikit-learn.
    • Proven experience with outlier/anomaly detection techniques.
    • Hands-on experience in forecasting models (time-series, regression, or advanced ML methods).
    • Strong SQL skills for working with large datasets.
    • Ability to communicate findings effectively to both technical and non-technical stakeholders.

    Nice to Have

    • Experience with ML frameworks (TensorFlow, PyTorch).
    • Familiarity with MLOps practices and model deployment.
    • Exposure to logistics, supply chain, or financial data.
    • Knowledge of cloud platforms (AWS, GCP, Azure).

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · B1 - Intermediate
    • Focuses on raw data processing, creation of connectors, and normalization.
    • Lays the foundation for AI and R&D-oriented work.
       

    About the Product
    The product is an AI-powered platform built for the iGaming sector, focused on improving user retention and engagement. It provides casino platforms with tools such as personalized interactions, workflow automations, and AI assistants. The platform acts as a retention layer across the player lifecycle, helping predict, prevent, and personalize key moments - from onboarding to churn, through smart automation and AI.


    3D Computer Vision Engineer (3D Reconstruction)

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · B2 - Upper Intermediate

    DeepX is on a mission to push the boundaries of 3D perception. We’re looking for an experienced Computer Vision Engineer (3D Reconstruction) who lives and breathes reconstruction pipelines, thrives on geometry challenges, and wants to architect systems that redefine how machines see the world.

    This is not a plug-and-play role. You’ll be building the core reconstruction engine that powers our vision products - designing a modular, blazing-fast pipeline where algorithms can be swapped in and out like precision-tuned gears. Think COLMAP on steroids, fused with neural rendering, and optimized for scale.

        Core Tech Stack:

    • Languages: C++, Python
    • CV/3D Libraries: OpenCV, Open3D, PCL, COLMAP
    • Math/Utils: NumPy, Eigen
    • Visualization: Plotly, Matplotlib
    • Deep Learning: PyTorch, TensorFlow
    • Data: Point clouds, meshes, multi-view image sets.
       

      Desired Expertise: 

    • Core Expertise: Deep, hands-on knowledge of 3D computer vision fundamentals, including projective geometry, triangulation, transformations, and camera models.
    • Algorithm Mastery: Proven experience with point cloud and mesh processing algorithms, such as ICP for registration and refinement.
    • Development Experience: Strong software engineering skills, primarily in a Linux environment. Experience deploying applications on Windows (or via WSL) is a major plus.
    • Data Handling: Experience managing and analyzing the large datasets typical in 3D reconstruction.
    • Projective Geometry Mastery: Camera models, projections, triangulation, multi-sensor fusion.
    • Transformations: Rotations, quaternions, coordinate system conversions, 3D frame manipulations.
    • SfM & MVS: Proven hands-on with pipelines and dense reconstructions.
    • SLAM: Bundle adjustment, pose graph optimization, loop closure.
    • Code Craft: Strong software engineering chops - designing modular, performant, production-grade systems.
    • Visualization: Proficiency with 3D visualization tools and libraries (e.g., OpenGL, Blender scripting) for rendering and debugging point clouds and meshes.
    • Bonus Points:
      - You’ve built a full 3D reconstruction pipeline from scratch.
      - Hands-on with Gaussian Splatting or NeRFs.
      - Experience with SuperGlue or other state-of-the-art feature matching models.
      - Hybrid reconstruction experience: fusing classical geometry with neural methods.
      - Experience with real-time or streaming reconstruction systems.
      - Familiarity with emerging topics like 3D scene segmentation and the application of LLMs to geometric data.
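    For reference, the ICP registration named in the expertise list above can be condensed into a point-to-point NumPy sketch. The synthetic cloud, the transform, and the brute-force nearest-neighbour search are illustrative assumptions; a production pipeline would use Open3D or PCL rather than this toy version.

    ```python
    import numpy as np

    def best_fit_transform(A, B):
        """Least-squares rigid transform (R, t) mapping point set A onto B via SVD (Kabsch)."""
        cA, cB = A.mean(axis=0), B.mean(axis=0)
        H = (A - cA).T @ (B - cB)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        return R, cB - R @ cA

    def icp(source, target, iters=20):
        """Minimal point-to-point ICP with brute-force nearest neighbours."""
        src = source.copy()
        for _ in range(iters):
            d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
            R, t = best_fit_transform(src, target[d.argmin(axis=1)])
            src = src @ R.T + t       # apply the incremental rigid update
        return src

    # Synthetic cloud: target is the source rotated 5 degrees about z and shifted.
    theta = np.deg2rad(5.0)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    source = np.random.default_rng(0).uniform(-1.0, 1.0, size=(200, 3))
    target = source @ Rz.T + np.array([0.05, -0.05, 0.05])
    aligned = icp(source, target)
    print(np.abs(aligned - target).max())
    ```

    The SVD-based `best_fit_transform` is the closed-form inner step; real pipelines swap the brute-force matching for a k-d tree and add outlier rejection.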

    What You’ll Do:

    • End-to-End Pipeline Development: You will architect, build, and deploy a robust, high-performance 3D reconstruction pipeline from multi-view imagery. This includes owning and optimizing all core modules: feature detection, matching, camera pose estimation, SfM, dense stereo (MVS), and mesh/surface generation.
    • System Architecture: Design a highly modular and scalable system that allows for interchangeable components, facilitating rapid A/B testing between classical geometric algorithms and modern neural approaches.
    • Performance Optimization: Profile and optimize the entire pipeline for low-latency, real-time performance. This involves advanced GPU programming (CUDA/OpenCL), efficient memory management to handle large models, and leveraging modern compute frameworks.
    • Research & Integration: Stay at the forefront of academic and industry research. You will be responsible for identifying, implementing, and integrating state-of-the-art methods in SLAM, neural rendering (NeRFs, 3DGS), and hybrid geometry-neural network models.
    • Data Management: Develop solutions for handling, processing, and distributing large-scale image and 3D datasets (e.g., using tools like Rclone).

      Why Join DeepX?

      This is your chance to own a core engine at the frontier of 3D vision. You’ll be surrounded by a small but elite team, working on real-world deployments where your algorithms won’t just run in benchmarks - they’ll run in airports, mines, logistics hubs, and beyond. If you want your code to shape how machines perceive the world at scale, this is the place.

       

      Sounds like you? -> Let’s talk
      Send us your portfolio, GitHub, or projects - we love seeing real reconstructions more than polished CVs.
       


    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    We are seeking an experienced Data Scientist with expertise in Large Language Models (LLMs) such as GPT, Claude, and related technologies to join our team in Ukraine. The ideal candidate will have a strong background in natural language processing (NLP), machine learning, and deep learning models. They will play a critical role in developing and deploying cutting-edge LLM applications to drive innovation across our product lines.

    Responsibilities:

    • Design, develop and optimize Large Language Models for various NLP tasks such as text generation, summarization, translation, and question-answering
    • Conduct research and experiments to push the boundaries of LLM capabilities and performance
    • Collaborate with cross-functional teams (engineering, product, research) to integrate LLMs into product offerings
    • Develop tools, pipelines and infrastructure to streamline LLM training, deployment and monitoring
    • Analyze and interpret model outputs, investigate errors/anomalies, and implement strategies to improve accuracy
    • Stay current with the latest advancements in LLMs, NLP and machine learning research
    • Communicate complex technical concepts to both technical and non-technical stakeholders
       

    Requirements:

    • MS or PhD degree in Computer Science, Data Science, AI, or a related quantitative field
    • 4+ years of hands-on experience developing and working with deep learning models, especially in NLP/LLMs
    • Expert knowledge of Python, PyTorch, TensorFlow, and common deep learning libraries
    • Strong understanding of language models, attention mechanisms, transformers, sequence-to-sequence modeling
    • Experience training and fine-tuning large language models
    • Proficiency in model deployment, optimization, scaling and serving
    • Excellent problem-solving, analytical and quantitative abilities
    • Strong communication skills to present technical information clearly
    • Ability to work collaboratively in a team environment
    • Fluency in Ukrainian and English
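    As a pointer to the "attention mechanisms" item in the requirements above, here is a minimal NumPy sketch of scaled dot-product attention; the shapes and random inputs are illustrative only.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable softmax
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
        d_k = Q.shape[-1]
        weights = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)
        return weights @ V, weights

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
    K = rng.normal(size=(6, 8))   # 6 key positions
    V = rng.normal(size=(6, 8))
    out, w = scaled_dot_product_attention(Q, K, V)
    print(out.shape)              # each output row is a weighted mix of V's rows
    ```

    Transformer layers stack this operation across multiple heads and positions; the single-head form above is the core computation.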

       

    Preferred:

    • Research experience in LLMs, NLP, machine learning
    • Experience working with multi-modal data (text, image, audio)
    • Knowledge of cloud platforms like AWS, GCP for model training
    • Understanding of MLOps and production ML workflows
    • Background in information retrieval, knowledge graphs, reasoning

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate
    We are seeking an experienced Data Scientist with a strong background in healthcare analytics, predictive modeling, and Salesforce data ecosystems. The ideal candidate will have proven expertise in building and deploying advanced machine learning models...

    We are seeking an experienced Data Scientist with a strong background in healthcare analytics, predictive modeling, and Salesforce data ecosystems. The ideal candidate will have proven expertise in building and deploying advanced machine learning models that drive actionable insights for patient engagement, clinical decision-making, and healthcare operations. As a team leader, this role will also guide junior data scientists and collaborate closely with engineers, clinicians, and business stakeholders.

     

    Responsibilities

    • Lead the design, development, and deployment of predictive models and advanced analytics solutions.
    • Define the machine learning strategy for healthcare data use cases, ensuring scalability, compliance, and measurable impact.
    • Collaborate with data engineers to ensure robust, high-quality data pipelines for model training and deployment.
    • Translate business and clinical challenges into analytical problems and deliver actionable insights.
    • Partner with product, clinical, and technology teams to integrate ML-driven insights into patient engagement and workflow optimization.
    • Mentor and coach junior data scientists, fostering best practices in modeling, experimentation, and documentation.
    • Ensure compliance with data governance and healthcare privacy standards (HIPAA, HITECH).
    • Drive innovation by staying current with state-of-the-art ML techniques and evaluating their applicability to healthcare.

       

    Experience Requirements:

    Advanced Data Science & Machine Learning (5+ years):

    • Proven track record designing, training, and deploying machine learning models for predictive analytics, patient segmentation, and outcome forecasting.
    • Hands-on experience applying statistical modeling, NLP, and time-series forecasting to healthcare datasets.
    • Strong ability to translate business challenges into analytical problems and deliver measurable value.

    Healthcare Data Expertise (3+ years):

    • Solid background working with healthcare data (EHR/EMR, patient records, lab results, claims data).
    • Familiarity with healthcare standards and compliance (HIPAA, HITECH).
    • Experience addressing challenges with sensitive and regulated healthcare datasets.

    Cloud & Data Platforms (5+ years):

    • Proven experience with AWS services (S3, Redshift, SageMaker, Glue, Athena, Lambda) for model development, training, and deployment.
    • Ability to design scalable ML pipelines in cloud environments.
    • Familiarity with integrating Salesforce data into analytics workflows.

     

    Required Skills:

    • Expertise in Python and data science libraries (pandas, NumPy, scikit-learn, TensorFlow, PyTorch).
    • Strong SQL skills and familiarity with data warehouses (Redshift, Snowflake, BigQuery).
    • Experience operationalizing ML models (MLOps) using AWS SageMaker, MLflow, or similar frameworks.
    • Ability to communicate complex analytical results to non-technical stakeholders and healthcare professionals.
    • Proven experience collaborating with data engineers to ensure data pipelines meet analytical requirements.
    • Strong leadership skills with experience mentoring data scientists and managing cross-functional analytics projects.

     

    Preferred Qualifications:

    • Advanced degree (Master’s or PhD) in Data Science, Statistics, Computer Science, or related field.
    • Prior experience in healthcare analytics or life sciences.
    • Experience applying machine learning to Salesforce data for patient engagement and workflow optimization.
    • Knowledge of data visualization tools (Tableau, Power BI, or similar).

     

    We offer:

    • Remote work;
    • Flexible schedule and ability to manage your working hours;
    • Competitive salary;
    • Working in a team of skilled and experienced specialists;
    • Opportunities for professional development.