Jobs (80)

  • · 14 views · 0 applications · 2d

    Machine Learning Engineer

    Part-time · Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate

    Responsibilities

     

    Model Fine-Tuning and Deployment:

    Fine-tune pre-trained models (e.g., BERT, GPT) for specific tasks and deploy them using Amazon SageMaker and Bedrock.
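
    As a rough illustration of the fine-tuning half of this responsibility, the sketch below adapts a pre-trained BERT checkpoint to a binary text-classification task with Hugging Face Transformers. The dataset, label count, and hyperparameters are placeholders, and packaging the saved artifact for SageMaker or Bedrock hosting is a separate step not shown here.

```python
# Minimal task-specific fine-tuning sketch (illustrative placeholders throughout).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # placeholder dataset; swap in the task-specific corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-finetuned",
                         per_device_train_batch_size=16,
                         num_train_epochs=3,
                         learning_rate=2e-5)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["test"])
trainer.train()
trainer.save_model("bert-finetuned")  # artifact that can then be packaged for hosting
```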

    RAG Workflows:

    Establish Retrieval-Augmented Generation (RAG) workflows that leverage knowledge bases built on Kendra or OpenSearch. This includes integrating various data sources, such as corporate documents, inspection checklists, and real-time external data feeds.
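
    As a hedged sketch of such a workflow, the snippet below retrieves passages from an OpenSearch index and grounds a Bedrock model response on them. The endpoint, index and field names, prompt, and model ID are illustrative assumptions; a Kendra- or vector-search-based retriever would slot into the same structure.

```python
# Minimal retrieve-then-generate sketch (names and IDs are assumptions, not project settings).
import json

import boto3
from opensearchpy import OpenSearch

search = OpenSearch(hosts=[{"host": "my-opensearch-endpoint", "port": 443}], use_ssl=True)
bedrock = boto3.client("bedrock-runtime")

def answer(question: str, index: str = "corporate-docs", k: int = 3) -> str:
    # 1) Retrieve the top-k passages (plain BM25 here; a k-NN/vector query works similarly).
    hits = search.search(index=index, body={"size": k, "query": {"match": {"text": question}}})
    context = "\n\n".join(h["_source"]["text"] for h in hits["hits"]["hits"])

    # 2) Generate an answer grounded in the retrieved context.
    body = {"anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user",
                          "content": f"Use only this context:\n{context}\n\nQuestion: {question}"}]}
    resp = bedrock.invoke_model(modelId="anthropic.claude-3-haiku-20240307-v1:0",
                                body=json.dumps(body))
    return json.loads(resp["body"].read())["content"][0]["text"]
```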

    MLOps Integration:

    The project includes a comprehensive MLOps framework to manage the end-to-end lifecycle of machine learning models. This includes continuous integration and delivery (CI/CD) pipelines for model training, versioning, deployment, and monitoring. Automated workflows ensure that models are kept up-to-date with the latest data and are optimized for performance in production environments.

    Scalable and Customizable Solutions:

    Ensure that both the template and ingestion pipelines are scalable, allowing for adjustments to meet specific customer needs and environments. This involves setting up RAG workflows, knowledge bases using Kendra/OpenSearch, and seamless integration with customer data sources.

    End-to-End Workflow Automation:

    Automate the end-to-end process from user input to response generation, ensuring that the solution leverages AWS services like Bedrock Agents, CloudWatch, and QuickSight for real-time monitoring and analytics.

    Advanced Monitoring and Analytics:

    Integrated with AWS CloudWatch, QuickSight, and other monitoring tools, the accelerator provides real-time insights into performance metrics, user interactions, and system health. This allows for continuous optimization of service delivery and rapid identification of any issues.

    Model Monitoring and Maintenance:

    Implement model monitoring to track performance metrics and trigger retraining as necessary.
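
    One simplified way to express this responsibility: compare recent error and feature-drift statistics against baselines and kick off retraining when a threshold is crossed. The thresholds and the launch_retraining_job() hook below are hypothetical placeholders for whatever pipeline (SageMaker Pipelines, Step Functions, Airflow) is actually in use.

```python
# Toy monitoring/retraining trigger; thresholds and the retraining hook are placeholders.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import mean_absolute_error

ERROR_THRESHOLD = 0.15   # assumed acceptable MAE
DRIFT_P_VALUE = 0.01     # assumed significance level for the KS drift test

def launch_retraining_job() -> None:
    print("Retraining triggered")  # placeholder: start a training job / workflow here

def check_and_retrain(y_true, y_pred, baseline_feature, live_feature) -> None:
    mae = mean_absolute_error(y_true, y_pred)
    drift = ks_2samp(baseline_feature, live_feature)
    if mae > ERROR_THRESHOLD or drift.pvalue < DRIFT_P_VALUE:
        launch_retraining_job()

# Example run on synthetic numbers:
rng = np.random.default_rng(0)
check_and_retrain(rng.normal(size=500), rng.normal(size=500),
                  rng.normal(0.0, 1.0, 1000), rng.normal(0.5, 1.0, 1000))
```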

    Collaboration:

    Work closely with data engineers and DevOps engineers to ensure seamless integration of models into the production pipeline.

    Documentation:

    Document model development processes, deployment procedures, and monitoring setups for knowledge sharing and future reference.

     

    Must-Have Skills

     

    Machine Learning: Strong experience with machine learning frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.

    MLOps Tools: Proficiency with Amazon SageMaker for model training, deployment, and monitoring.

    Document processing: Experience processing Word, PDF, and image documents.

    OCR: Experience with OCR tools such as Tesseract or AWS Textract (preferred).

    Programming: Proficiency in Python, including libraries such as Pandas, NumPy, and Scikit-Learn.

    Model Deployment: Experience with deploying and managing machine learning models in production environments.

    Version Control: Familiarity with version control systems like Git.

    Automation: Experience with automating ML workflows using tools like AWS Step Functions or Apache Airflow.

    Agile Methodologies: Experience working in Agile environments using tools like Jira and Confluence.

     

    Nice-to-Have Skills

     

    LLM: Experience with LLM/GenAI models, LLM services (Bedrock or OpenAI), LLM abstraction layers (e.g., Dify, LangChain, FlowiseAI), agent frameworks, and RAG.

    Deep Learning: Experience with deep learning models and techniques.

    Data Engineering: Basic understanding of data pipelines and ETL processes.

    Containerization: Experience with Docker and Kubernetes (EKS).

    Serverless Architectures: Experience with AWS Lambda and Step Functions.

    Rule engine frameworks: Drools or similar.

     

    If you are a motivated individual with a passion for ML and a desire to contribute to a dynamic team environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of infrastructure and driving innovation in software delivery processes.

  • · 97 views · 6 applications · 25d

    Data Scientist / Quantitative Researcher

    Full Remote · Worldwide · Product · 3 years of experience

    We are Onicore, a fintech company specializing in developing products for cryptocurrency operations. 

    Registered in the USA, our company is powered by a talented Ukrainian team, working across the globe.
     

    📊 Now we're on the hunt for a specialist who will drive our algorithmic trading project.

     

    Your skills:

    - 3+ years of experience in Data Science;

    - excellent command of Python, understanding of the principles of OOP;

    - deep knowledge of linear algebra, probability theory, and mathematical statistics;

    - data collection and preprocessing (numpy, pandas, scikit-learn, ta-lib);

    - experience working with all types of classical machine learning (Supervised Learning, Unsupervised Learning, Reinforcement Learning);

    - development experience and a deep understanding of RNN, LSTM, GRU, CNN, and Transformer architectures for time series analysis and prediction;

    - confident use of both high-level and low-level TensorFlow APIs (writing custom training loops, custom metrics and loss functions; a minimal loop is sketched after this list). Knowledge of PyTorch is welcome;

    - the ability to visualize the learning process using TensorBoard;

    - gradient boosting frameworks (distributed XGBoost/LightGBM);

    - visualization of results (matplotlib, seaborn).
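
    For the custom-training-loop point above, here is a minimal low-level TensorFlow sketch with tf.GradientTape, a custom loss, and a tracked metric on synthetic time-series windows; the toy model, shapes, and penalty term are illustrative assumptions only, not a trading strategy.

```python
# Minimal custom training loop (toy model and synthetic data).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(30, 1)),        # 30-step univariate windows (assumed)
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(1e-3)
mae_metric = tf.keras.metrics.MeanAbsoluteError()

def custom_loss(y_true, y_pred):
    # MSE plus an extra penalty when the predicted direction is wrong (illustrative).
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    sign_penalty = tf.reduce_mean(
        tf.cast(tf.not_equal(tf.sign(y_true), tf.sign(y_pred)), tf.float32))
    return mse + 0.1 * sign_penalty

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = custom_loss(y, y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    mae_metric.update_state(y, y_pred)
    return loss

x = tf.random.normal((64, 30, 1))
y = tf.random.normal((64, 1))
for epoch in range(3):
    loss = train_step(x, y)
    print(f"epoch {epoch}: loss={float(loss):.4f} mae={float(mae_metric.result()):.4f}")
    mae_metric.reset_state()
```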

     

    Would be a plus:

    - experience with currency markets;

    - PhD degree in the field of data science / machine learning.

     

    Your responsibilities:

    ● Solving algorithmic trading problems: regression/autoregression, classification of time series/financial series, and working with cryptocurrency quotes.


    What's in it for you? 

    🏥 Health first: Comprehensive medical insurance.

    🤓 Keep growing: We cover courses, conferences, training sessions, and workshops.

    💪 Stay active mentally and physically: Sports / hobby / personal psychologist to fuel yourself.

    💼 We've got your back: Access to legal assistance when you need it.

    🧗‍♂️ Inspiring vibes: Join a motivated, goal-oriented team that supports each other.

    🧑‍💻 Make a difference: Have a direct impact on shaping and growing the product.

    💻 Work smarter: Corporate laptops to help you do your best work.


    Join our team and help us level up!

  • · 46 views · 1 application · 9d

    GenAI Consultant

    Ukraine · 5 years of experience · B2 - Upper Intermediate

    EPAM GenAI Consultants are changemakers who bridge strategy and technology – applying agentic intelligence, RAG, and multimodal AI to transform how enterprises operate, serve users, and make decisions. 

     

    Preferred Tech stack 

     

     Programming Languages 

    • Python (*) 
    • TypeScript 
    • Rust 
    • Mojo 
    • Go 

     

     Fine-Tuning & Optimization 

    • LoRA (Low-Rank Adaptation) 
    • PEFT (Parameter-Efficient Fine-Tuning) 
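
    As a minimal illustration of parameter-efficient fine-tuning with these two techniques, the sketch below wraps a small causal LM in LoRA adapters via the PEFT library; the base model and target module names are assumptions and differ per architecture.

```python
# LoRA via PEFT: only low-rank adapter weights become trainable.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

lora_cfg = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; differs per architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically a small fraction of the base parameters
# The wrapped model can then be passed to a standard Trainer or custom training loop.
```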

     

    Foundation & Open Models 

    • OpenAI (GPT series), Anthropic Claude Family, Google Gemini, Grok (* at least one of them) 
    • Llama 
    • Falcon 
    • Mistral 

     

    Inference Engines  

    • vLLM 

     

     Prompting & Reasoning Paradigms (*) 

    • CoT (Chain of Thought) 
    • ToT (Tree of Thought) 
    • ReAct (Reasoning + Acting) 
    • DSPy 

     

    Multimodal AI Models 

    • CLIP (*) 
    • BLIP2 
    • Whisper 
    • LLaVA 
    • SAM (Segment Anything Model) 

     

     Retrieval-Augmented Generation (RAG) 

    • RAG (core concept) (*) 
    • RAGAS (RAG evaluation and scoring) (*) 
    • Haystack (RAG orchestration & experimentation) 
    • LangChain Evaluation (LCEL Eval) 

     

    Agentic Frameworks 

     

    • CrewAI (*) 
    • AutoGen, AutoGPT, LangGraph, Semantic Kernel, LangChain (* at least 2 of them) 
    • Prompt Tools: PromptLayer, PromptFlow (Azure), Guidance by Microsoft (* at least one of them) 

     

    Evaluation & Observability 

    • RAGAS – Quality metrics for RAG (faithfulness, context precision, etc.) (*) 
    • TruLens – LLM eval with attribution and trace inspection (*) 
    • EvalGAI – GenAI evaluation testbench 
    • Giskard – Bias and robustness testing for NLP 
    • Helicone – Real-time tracing and logging for LLM apps 
    • HumanEval – Code generation correctness testing 
    • OpenRAI – Evaluation agent orchestration 
    • PromptBench – Prompt engineering comparison 
    • Phoenix by Arize AI – Multimodal and LLM observability 
    • Zeno – Human-in-the-loop LLM evaluation platform 
    • LangSmith – LangChain observability and evaluation 
    • WhyLabs – Data drift and model behavior monitoring 

     

    Explainability & Interpretability (understanding) 

    • SHAP 
    • LIME 

     

    Orchestration & Experimentation (*) 

    • MLflow 
    • Airflow 
    • Weights & Biases (W&B) 
    • LangSmith 

     

     Infrastructure & Deployment 

    • Kubernetes 
    • Amazon SageMaker 
    • Microsoft Azure AI 
    • Google Vertex AI 
    • Docker 
    • Ray Serve (for distributed model serving) 

     

    Responsibilities 

    • Lead GenAI discovery workshops with clients
    • Design Retrieval-Augmented Generation (RAG) systems and agentic workflows
    • Deliver PoCs and MVPs using LangChain, LangGraph, CrewAI, Semantic Kernel, DSPy, and RAGAS 
    • Ensure Responsible AI principles in deployments (bias, fairness, explainability) 
    • Support RFPs, technical demos, and GenAI architecture narratives 
    • Reuse accelerators/templates for faster delivery 
    • Set up governance & compliance for enterprise-scale AI 
    • Use evaluation frameworks to close feedback loops 

     

    Requirements 

    • Consulting: Experience in exploring the business problem and converting it into applied AI technical solutions; expertise in pre-sales and solution definition activities
    • Data Science: 3+ years of hands-on experience with core Data Science, as well as knowledge of one of the advanced Data Science and AI domains (Computer Vision, NLP, Advanced Analytics, etc.)
    • Engineering: Experience delivering applied AI from concept to production; familiarity with MLOps, data engineering, and the design of data analytics platforms; technical leadership 
    • Leadership: Track record of delivering complex AI-empowered and/or AI-empowering programs to clients in a leadership position. Experience in managing and growing a team to scale up Data Science, AI, and ML capabilities is a big plus. 
    • Excellent communication skills (active listening, writing, and presentation), drive for problem solving and creative solutions, high EQ 
    • Experience with LLMOps or GenAIOps tooling (e.g., guardrails, tracing, prompt-tuning workflows) 
    • Understanding of the importance of AI product evaluation is a must 
    • Knowledge of cloud GenAI platforms (AWS Bedrock, Azure OpenAI, GCP Vertex AI) 
    • Understanding of data privacy, compliance, and governance in GenAI (GDPR, HIPAA, SOC2, RAI, etc.) 
    • In-depth understanding of a specific industry or a broad range of industries. 

     

  • · 129 views · 17 applications · 3d

    Machine Learning Specialist – Price Prediction

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate

    About Us

    Cherry is a fast-growing marketplace for buying, selling, leasing, and auctioning commercial vehicles and equipment. We're building an industry-first valuation engine that helps our users make informed decisions in seconds. Now, we're looking for a Machine Learning Engineer who can bring strong expertise in predictive modeling and, ideally, document automation using AI.

    Role Overview

    We are seeking an experienced ML specialist to build models that predict market prices for trucks and equipment based on auction results, historical sales, and vehicle specifications. This role will also involve setting up automated data pipelines, scraping structured and semi-structured data, and optionally working on generating intelligent reports or documents from data.

    Key Responsibilities

    • Build and refine price prediction models using data from auction sites, dealership listings, and historical records.
    • Design and maintain data scraping pipelines (e.g., BeautifulSoup, Selenium, Scrapy) to gather auction and sale data from multiple public sources – a plus.
    • Clean, normalize, and store data efficiently for training and inference.
    • Apply feature engineering techniques on specs like make, model, mileage, year, VIN, etc. (a baseline is sketched after this list).
    • Work closely with product and engineering teams to deploy models in production.
    • (Optional but valued): Use NLP or generative AI to create documents or listing descriptions automatically.
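
    To make the price-prediction and feature-engineering items above concrete, here is a minimal baseline: one-hot encode categorical specs, fit a gradient-boosting regressor, and report RMSE/MAE. The columns and rows are illustrative placeholders, not the actual marketplace schema.

```python
# Tabular price-prediction baseline on placeholder data.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({  # placeholder rows; real data would come from auctions and listings
    "make": ["Volvo", "MAN", "Volvo", "Scania", "MAN", "Scania"],
    "model": ["FH16", "TGX", "FH13", "R450", "TGS", "R500"],
    "year": [2018, 2016, 2019, 2017, 2015, 2020],
    "mileage": [410_000, 620_000, 350_000, 540_000, 700_000, 280_000],
    "price": [52_000, 31_000, 61_000, 44_000, 27_000, 68_000],
})

X, y = df.drop(columns="price"), df["price"]
pre = ColumnTransformer([("cat", OneHotEncoder(handle_unknown="ignore"), ["make", "model"])],
                        remainder="passthrough")
pipe = Pipeline([("pre", pre), ("gbr", GradientBoostingRegressor(random_state=0))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
pipe.fit(X_tr, y_tr)
pred = pipe.predict(X_te)
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)),
      "MAE:", mean_absolute_error(y_te, pred))
```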

    Requirements

    • Proven experience in machine learning with a focus on regression/predictive models.
    • Strong Python skills; familiar with tools like Scikit-learn, XGBoost, LightGBM, TensorFlow, or PyTorch.
    • Experience in web scraping is a plus (BeautifulSoup, Scrapy, etc.).
    • Familiarity with model evaluation metrics for regression (e.g., RMSE, MAE).
    • Comfortable working with structured data (CSV, JSON, APIs) and preprocessing pipelines.
    • Fluent in Git and version control workflows.
    • Experience deploying or working with models in production (FastAPI, Flask, AWS/GCP preferred).

    Nice to Have

    • Familiarity with automated document generation, AI agents, or LLM APIs (OpenAI, Langchain).

       

  • · 13 views · 0 applications · 3d

    Computer Vision Engineer (SLAM, VIO)

    Ukraine · Product · 3 years of experience · MilTech 🪖

    We are looking for a Computer Vision Engineer with a background in classical computer vision techniques and hands-on implementation of low-level CV algorithms.

    The ideal candidate will have experience with SLAM, Visual-Inertial Odometry (VIO), and sensor fusion.

    We consider engineers at Middle/Senior levels – tasks and responsibilities will be adjusted accordingly.

     

    Required Qualifications:

    • 3+ years of hands-on experience with classical computer vision
    • Knowledge of popular computer vision networks and components 
    • Understanding of geometrical computer vision principles
    • Hands-on experience in implementing low-level CV algorithms
    • Practical experience with SLAM and/or Visual-Inertial Odometry (VIO)
    • Proficiency in C++
    • Experience with Linux
    • Ability to quickly navigate through recent research and trends in computer vision.
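
    A small example of the geometric, low-level side of this work (in Python with OpenCV for brevity, though production code here would be C++): match ORB features between two frames, estimate the essential matrix with RANSAC, and recover the relative camera pose. The image paths and intrinsics are placeholders.

```python
# Two-view relative pose sketch; paths and camera intrinsics are placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed pinhole intrinsics

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("Relative rotation:\n", R, "\nTranslation direction:", t.ravel())
```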

    Nice to Have:

    • Experience with Python
    • Familiarity with neural networks and common CV frameworks/libraries (OpenCV, NumPy, PyTorch, ONNX, Eigen, etc.)
    • Experience with sensor fusion.
  • · 25 views · 3 applications · 19d

    Senior Game Mathematician

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate

    FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
    We develop innovations in the field of gambling and betting through a complex multi-component platform that is capable of withstanding enormous loads and providing a unique experience for players.
     

    FAVBET Tech does not organize and conduct gambling on its platform. Its main focus is software development.

    Main areas of work:

    • Betting/Gambling Platform Software Development – software development that is easy to use and personalized for each customer.
    • Highload Development – development of highly loaded services and systems.
    • CRM System Development – development of a number of services to ensure a high level of customer service, effective engagement of new customers and retention of existing ones.
    • Big Data – development of complex systems for processing and analysis of big data.
    • Cloud Services – we use cloud technologies for scaling and business efficiency.

     

    Responsibilities:

    • Develop and design the math side of casino games, mostly slot machines
    • Determine and calculate probabilities, and build game behavior and properties (a toy RTP calculation is sketched after this list)
    • Cooperate with product managers and developers
    • Come up with new and innovative ideas and also be aware of existing features in the industry
    • Maintenance of existing games
    • Work closely with development teams
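
    As a toy illustration of the probability and RTP calculations referenced above, the sketch below exhaustively enumerates the stops of a made-up 3-reel, single-payline game and computes its theoretical return to player; the reel strips and paytable are invented for the example.

```python
# Theoretical RTP of a toy 3-reel slot by exhaustive enumeration of equally likely stops.
from itertools import product

reels = [
    ["A", "A", "K", "Q", "Q", "J"],   # reel 1 strip
    ["A", "K", "K", "Q", "J", "J"],   # reel 2 strip
    ["A", "K", "Q", "Q", "J", "J"],   # reel 3 strip
]
paytable = {"A": 50, "K": 20, "Q": 10, "J": 5}  # payout (in bets) for three of a kind
bet = 1.0

total_payout = 0.0
combos = 0
for stops in product(*reels):         # every combination of reel stops
    combos += 1
    if stops[0] == stops[1] == stops[2]:
        total_payout += paytable[stops[0]] * bet

rtp = total_payout / (combos * bet)
print(f"Combinations: {combos}, theoretical RTP: {rtp:.2%}")  # 216 combinations, ~92.6% here
```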

     

    Requirements:

    • BS/MS degree in Mathematics, Statistics, or a similar discipline with very strong mathematical skills
    • 2+ years of experience in the gaming industry
    • Extremely detail-oriented, fast-learning, and highly motivated person
    • Creative, productive, working as part of a team, responsible
    • Basic programming knowledge
    • Advanced programming – an advantage
    • Strong communication skills

     

    We offer:

    • 30 days off – we value rest and recreation;
    • Medical insurance for employees, company-funded training, and gym membership;
    • Remote work or the option to work from our own modern loft office with spacious workplaces and brand-new equipment (near Pochaina metro station);
    • Flexible work schedule – we expect a full-time commitment but do not track your working hours;
    • Flat hierarchy without micromanagement – our doors are open, and all teammates are approachable.

     

    During the war, the company actively supports the Ministry of Digital Transformation of Ukraine in the initiative to deploy an IT army and has already organized its own cyber warfare unit, which strikes crushing blows against the enemy's IT infrastructure 24/7, coordinates with other cyber volunteers, and plans offensive actions on its IT front line.

  • · 19 views · 1 application · 17d

    Senior Data Scientist

    Full Remote · Ukraine · 3 years of experience · C1 - Advanced

    PwC is a network of over 370,000 employees in 149 countries focused on providing the highest quality services in the areas of audit, tax advisory, consulting and technology development.

    What we offer:
    - Official employment;
    - Remote work opportunity;
    - Annual performance and grade review;
    - A Dream team of experienced colleagues and high-class specialists;
    - Language courses (English & Polish languages);
    - Soft skills development;
    - Personal development plan and career coach;
    - Corporate events and team-buildings.

    Main responsibilities:

    • Developing innovative solutions for our clients by leveraging cutting-edge data science, machine learning, and AI technologies;
    • Developing intelligent assistants using the latest large language models (e.g., GPT-4, Falcon 2, Llama 3, Mixtral), employing Retrieval-Augmented Generation techniques, and utilizing agent frameworks (e.g., LangGraph, CrewAI);
    • Utilizing AI expertise to recommend the most effective technical approaches and solution architectures for addressing business challenges;
    • Leading data science project teams of 1-5 members, managing small to medium projects, and overseeing parts of larger engagements under senior supervision;
    • Working closely with PwC industry experts, clients, and higher management while actively participating in the proposal-making process within your area of expertise;
    • Communicating complex insights in a clear and actionable manner to non-technical colleagues and clients.

     

    Requirements:

    • 3+ years of relevant professional experience;
    • Solid knowledge of ML/AI concepts: types of algorithms, machine learning frameworks, model efficiency metrics, model life-cycle, AI architectures;
    • Knowledge and experience in production grade code development in Python;
    • Solid knowledge of SQL;
    • Experience with LLMs and related concepts (e.g. RAG, vector DBs, AI agents);
    • Understanding of cloud concepts and architectures, with hands-on experience in cloud services (GCP, AWS, Azure);
    • Knowledge of CI/CD and DevOps practices;
    • Experience with deploying code with Docker / Kubernetes;
    • Strong interpersonal and communication skills – essential in day-to-day cooperation with clients and the team;
    • Outstanding supervision and mentorship abilities;
    • Graduate of Economics, Econometrics, Quantitative Methods, Computer Science, Math, Physics, Operational Research or related discipline;
    • Excellent analytical and problem-solving skills, including the ability to independently disaggregate issues, identify root causes and recommend solutions to business problems;
    • Proficiency in English, both written and spoken.

       

      Nice to have: 

    • Familiarity with MLOps tools (e.g., Azure AI Studio, AzureML, Vertex AI, SageMaker, MLflow);
    • Knowledge of an extra programming language (e.g. C#, Go, Java);
    • Knowledge of Natural Language Processing techniques;
    • Experience in banking, retail or consulting;
    • Experience in leading project teams.

     

    Why PwC?

    We are not just numbers and reports. PwC is the impact you can create through your actions. Our team will help you achieve more, and we are ready to start this journey with you.

     

    Ready for a challenge? Send your resume and join the team that is shaping the future!

  • · 43 views · 2 applications · 10d

    Computer Vision Engineer

    Ukraine · Product · 2 years of experience · MilTech 🪖

    We are looking for a Computer Vision Engineer with a background in classical computer vision techniques and hands-on implementation of low-level CV algorithms. 

    The ideal candidate will have experience with Object tracking, Visual-Inertial Odometry (VIO) and sensor fusion. 

    We consider engineers at Middle+ and Senior levels - tasks and responsibilities will be adjusted accordingly. 

    Required Qualifications: 

    • 2+ years of hands-on experience with classical computer vision
    • Understanding of geometrical computer vision principles 
    • Hands-on experience in implementing low-level CV algorithms
    • Proficiency in C++ 
    • Experience with Linux 
    • Experience with Object tracking and detection tasks 
    • Ability to quickly navigate through recent research and trends in computer vision. 
    • Familiarity with neural networks and common CV frameworks/libraries (OpenCV, NumPy, PyTorch, ONNX, Eigen, etc.) 

    Nice to Have: 

    • Practical experience with SLAM and/or Visual-Inertial Odometry (VIO) 
    • Experience with Python 
    • Experience with sensor fusion. 
  • · 69 views · 12 applications · 12d

    Computer Vision Lead

    Full Remote · Countries of Europe or Ukraine · 6 years of experience · B2 - Upper Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the role:
    We seek an experienced AI/ML Team Leader to join our Client's startup team. As the Technical Lead, you will start with individual technical contributions and later take on an engineering manager role for the team you will help hire. You will create cutting-edge end-to-end AI camera solutions and engage effectively with customers and partners to grasp their requirements and ensure project success, overseeing work from inception to completion.

    Requirements:
    - Proven leadership experience with a track record of managing and developing technical teams;
    - Excellent customer-facing skills to understand and address client needs effectively;
    - Master's Degree in Computer Science or related field (PhD is a plus);
    - Solid grasp of machine learning and deep learning principles;
    - Strong experience in Computer Vision, including object detection, segmentation, tracking, keypoint/pose estimation;
    - Proven R&D mindset: capable of formulating and validating hypotheses independently, exploring novel approaches, and diving deep into model failures;
    - Proficiency in Python and deep learning frameworks;
    - Practical experience with state-of-the-art models, including different versions of YOLO and Transformer-based architectures (e.g., ViT, DETR, SAM);
    - Expertise in image and video processing using OpenCV;
    - Experience in model training, evaluation, and optimization;
    - Fluent written and verbal communication skills in English.

    Would be a plus:
    - Experience applying ML techniques to embedded or resource-constrained environments (e.g. edge devices, mobile platforms, microcontrollers);
    - Ideally, you have led projects where ML models were optimized, deployed, or fine-tuned for embedded systems, ensuring high performance and low latency under hardware limitations.

    We offer:
    - Free English classes with a native speaker and external courses compensation;
    - PE support by professional accountants;
    - 40 days of PTO;
    - Medical insurance;
    - Team-building events, conferences, meetups, and other activities;
    - There are many other benefits you'll find out at the interview.

  • · 126 views · 7 applications · 5d

    Senior Data Scientist

    Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    We're looking for a Senior Data Scientist to help shape how our clients build and scale AI solutions on AWS. In this role, you'll develop and deploy cutting-edge generative AI models on SageMaker – from model training and fine-tuning to optimized deployment – guiding customers from ideation to production through proof of concept. You'll work closely with startup founders, technical leaders, and account teams to create scalable, high-impact AI solutions that drive real business value.

     

    Responsibilities:

    • Model Development & Deployment: Deploy and train models on AWS SageMaker (using TensorFlow/PyTorch).
    • Model Tuning & Optimization: Fine-tune and optimize models using techniques like quantization and distillation, and tools like Pruna.ai and Replicate.
    • Generative AI Solutions: Design and implement advanced GenAI solutions, including prompt engineering and retrieval-augmented generation (RAG) strategies.
    • LLM Workflows: Develop agentic LLM workflows that incorporate tool usage, memory, and reasoning for complex problem-solving.
    • Scalability & Performance: Maximize model performance on AWS by leveraging techniques such as model compilation, distillation, and quantization, and by using AWS-specific features (a quantization sketch follows this list).
    • Collaboration: Work closely with Data Engineering, DevOps, and MLOps teams to integrate models into production pipelines and workflows.
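
    For the quantization item above, a minimal post-training dynamic quantization sketch with PyTorch is shown below: linear layers are converted to int8, which typically shrinks the model and speeds up CPU inference. The toy model stands in for a real network, and any accuracy impact would still have to be validated.

```python
# Post-training dynamic quantization of linear layers (toy model for illustration).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2)).eval()
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 768)
with torch.no_grad():
    print("fp32 logits:", model(x).squeeze().tolist())
    print("int8 logits:", quantized(x).squeeze().tolist())
```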

     

    Requirements:

    • 4+ years of experience in machine learning or data science roles, with deep learning (NLP, LLMs) expertise.
    • Expert in Python and deep learning frameworks (PyTorch/TensorFlow), and hands-on with AWS ML services (especially SageMaker and Bedrock).
    • Proven experience with generative AI and fine-tuning large language models.
    • Strong experience deploying ML solutions on AWS cloud infrastructure and familiarity with MLOps best practices.
    • Excellent communication skills and ability to work directly with customers in a consulting capacity.
    • A master's degree in a relevant field and AWS ML certifications are a plus.

     

    Benefits:

    • Professional training and certifications covered by the company (AWS, FinOps, Kubernetes, etc.)
    • International work environment
    • Referral program – enjoy cooperation with your colleagues and get a bonus 
    • Company events and social gatherings (happy hours, team events, knowledge sharing, etc.)
    • English classes
    • Soft skills training

       

    Country-specific benefits will be discussed during the hiring process.

     

    Automat-it is committed to fostering a workplace that promotes equal opportunities for all and believes that a diverse workforce is crucial to our success. Our recruitment decisions are based on your experience and skills, recognizing the value you bring to our team.

  • · 37 views · 2 applications · 26d

    Data Science Engineer

    Full Remote · Poland, Portugal, Spain · 5 years of experience · B2 - Upper Intermediate

    Quantum is a global technology partner delivering high-end software products that address real-world problems.

    We advance emerging technologies for outside-the-box solutions. We focus on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.

    Here at Quantum, we are dedicated to creating state-of-the-art solutions that effectively address the pressing issues faced by businesses and the world. To date, our team of exceptional people has already helped many organizations globally attain technological leadership.

    We constantly discover new ways to solve never-ending business challenges by adopting new technologies, even when there isn't yet a best practice. If you share our passion for problem-solving and making an impact, join us and enjoy getting to know our wealth of experience!

     

    About the position

    Quantum is expanding the team and has brilliant opportunities for a Data Science Engineer. The client is a technological research company that utilizes proprietary AI-based analysis and language models to provide comprehensive insights into global stocks in all languages. Our mission is to bridge the knowledge gap in the investment world and empower investors of all types to become "super-investors."

    Through our generative AI technology implemented into brokerage platforms and other financial institutions' infrastructures, we offer instant fundamental analyses of global stocks alongside bespoke investment strategies, enabling informed investment decisions for millions of investors worldwide.

     

    Must have skills:

    • At least 5 years of commercial experience in Data Science
    • Strong knowledge of linear algebra, calculus, statistics, and probability theory
    • Proficiency in algorithms and data structures
    • Experience with Machine Learning libraries (NumPy, SciPy, Pandas, Scikit-learn)
    • Experience with at least one Deep Learning framework (TensorFlow, Keras, or PyTorch)
    • Knowledge of modern Neural Network architectures
    • Experience in developing solutions with LLMs
    • Experience with Cloud Computing Platforms (AWS, Google Cloud, or Azure)
    • Practical experience with Docker
    • Experience with SQL
    • Strong understanding of Object-Oriented Programming (OOP) principles
    • Hands-on experience in building solutions for the financial domain
    • At least an Upper-Intermediate level of English (spoken and written)

     

    Would be a plus:

    • Experience with MLOps solutions
    • Basic understanding of Big Data concepts
    • Experience in classical Computer Vision algorithms
    • Participation in Kaggle competitions

     

    Your tasks will include:

    • Full-cycle data science projects
    • Data analysis and data preparation
    • Developing NLP / Deep Learning / Machine Learning models and deploying them to production
    • Sometimes, this will require the ability to implement methods from scientific papers and apply them to new domains

     

    We offer:

    • Delivering high-end software projects that address real-world problems
    • Surrounding experts who are ready to move forward professionally
    • Professional growth plan and team leader support
    • Taking ownership of R&D and socially significant projects
    • Participation in worldwide tech conferences and competitions
    • Taking part in regular educational activities
    • Being a part of a multicultural company with a fun and lighthearted atmosphere
    • Working from anywhere with flexible working hours
    • Paid vacation and sick leave days

     

    Join Quantum and take a step toward your data-driven future.

  • · 80 views · 27 applications · 24d

    Data Scientist

    Full Remote · Worldwide · 3 years of experience · B2 - Upper Intermediate

    We are looking for a Data Scientist to support a Data & AI team. The role is focused on developing scalable AI/ML solutions and integrating Generative AI into evolving business operations.


    About the Role

    As a Data Scientist, you will:

    • Collaborate with Product Owners, Data Analysts, Data Engineers, and ML Engineers to design, develop, deploy, and monitor scalable AI/ML products.
    • Lead initiatives to integrate Generative AI into business processes.
    • Work closely with business stakeholders to understand challenges and deliver tailored data-driven solutions.
    • Monitor model performance and implement improvements.
    • Apply best practices in data science and ML for sustainable, high-quality results.
    • Develop and fine-tune models with a strong focus on accuracy and business value.
    • Leverage cutting-edge technologies to drive innovation and efficiency.
    • Stay updated on advancements in AI and data science, applying new techniques to ongoing processes.


    About the Candidate

    We are looking for a professional with strong analytical and technical expertise.


    Must have:

    • 3+ years of hands-on experience in Data Science and ML.
    • Experience with recommendation systems and prescriptive analytics.
    • Proficiency in Python, SQL, and ML libraries/frameworks.
    • Proven experience developing ML models and applying statistical methods.
    • Familiarity with containerization and orchestration tools.
    • Excellent communication skills and strong command of English.
    • Bachelor's or Master's degree in Computer Science, Statistics, Physics, or Mathematics.


    Nice to have:

    • Experience with Snowflake.
    • Exposure to Generative AI and large language models.
    • Knowledge of AWS services.
    • Familiarity with NLP models (including transformers).
  • · 109 views · 2 applications · 20d

    Machine Learning Engineer

    Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    Responsibilities:

    • Design data science, statistical, machine learning and deep learning systems that influence millions of players
    • Implement and optimize appropriate ML algorithms and tools for time series and tabular data
    • Transform data science prototypes into full-scale products, while deploying and monitoring ML models
    • Run live tests and experiments
    • Train and retrain systems when necessary
    • Create or extend existing ML libraries and frameworks
    • Collaborate with other scientists, engineers, architects and analysts spread across several countries

    Requirements:

    • MSc in Computer Science, or any related degree
    • Solid working experience in Python and Java – high coding standards, clean, well-documented code, and extensive unit testing – Must.
    • Experience working with databases (SQL and NoSQL)
    • Experience with Scala
    • Experience with training, testing, deployment, and monitoring real-time (or near real-time) machine learning models in production
    • Experience with machine learning frameworks (like Keras, TensorFlow, or PyTorch) and libraries (like scikit-learn) – Big Advantage. 
    • Experience with Big Data tools, in particular batch and stream processing (Spark, Kafka, Hadoop, Hive, etc.) - Must 
    • Good understanding of container & orchestration technologies (Docker, Kubernetes, etc.) - Must
    • Experience working on high-scale, production-grade projects
    • All-around team player who is a self-motivated, fast learner
  • · 62 views · 1 application · 13d

    Senior/Middle Data Scientist (Data Preparation, Pre-training)

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master's or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, and RAG.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of the FineWeb2 approach or similar processing pipelines.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project's focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc. (a simplified sketch follows this list).
    - Form task-specific SFT/RLHF datasets from existing data, including data augmentation/labeling with an LLM as teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.
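
    A simplified sketch of the kind of cleaning/filtering step described in the first responsibility: normalize whitespace, keep documents in a target language, redact obvious PII patterns, and drop exact duplicates. The language code, regexes, and length threshold are illustrative assumptions; production pipelines would add toxicity filtering and fuzzy de-duplication.

```python
# Toy data-preparation pass: normalization, language filter, PII redaction, exact dedup.
import hashlib
import re

from langdetect import detect

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def clean_corpus(docs, lang="uk", min_chars=200):
    seen = set()
    for text in docs:
        text = re.sub(r"\s+", " ", text).strip()   # whitespace normalization
        if len(text) < min_chars:
            continue                               # drop near-empty documents
        try:
            if detect(text) != lang:
                continue                           # keep only the target language
        except Exception:
            continue                               # langdetect raises on undetectable text
        text = EMAIL.sub("<email>", text)          # naive PII redaction
        text = PHONE.sub("<phone>", text)
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen:
            continue                               # exact de-duplication
        seen.add(digest)
        yield text
```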

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • · 59 views · 1 application · 24d

    Senior Data Scientist

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    WE ARE

    SoftServe is a global digital solutions company, headquartered in Austin, Texas, and founded in 1993.

    With 2,000+ active projects across the USA, Europe, APAC, and LATAM, we deliver meaningful outcomes through bold thinking and deep expertise. Our people create impactful solutions, drive innovation, and genuinely enjoy what they do.

    The AI and Data Science Center of Excellence (CoE) is SoftServe's premier AI/ML hub, primarily based in Europe. With 130+ experts – including data scientists, research analysts, MLOps engineers, and ML and LLM architects – we cover the full AI lifecycle, from problem framing to deployment.

    In 2024, we delivered 150+ AI projects, including over 100 focused on Generative AI, combining scale with measurable impact.

    We are a 2024 NVIDIA Service Delivery Partner and maintain strong collaborations with Google Cloud, Amazon, and Microsoft, ensuring our teams always work with cutting-edge tools and technologies.

    We also lead Gen AI Lab – our internal innovation engine focused on applied research and cross-functional collaboration in Generative AI.

    In 2025, a key area of innovation is Agentic AI – where we design and deploy autonomous, collaborative agent systems capable of addressing complex, real-world challenges at scale for our clients and internally.


    IF YOU ARE

    • Experienced in Generative AI and natural language processing (NLP), working with large-scale transformer models and generative pre-trained LLMs like GPT-4, Claude, and Gemini
    • Knowledgeable about the latest advancements in diffusion models and other generative frameworks for text and image generation
    • Adept at applying advanced deep learning techniques to practical use cases
    • Well-versed in emerging trends and breakthroughs in machine learning, deep learning, and NLP, with a strong focus on their real-world applications
    • Proficient in working with state-of-the-art pre-trained language models like GPT-4 and BERT, including fine-tuning for specialized tasks
    • Aware of the software development lifecycle for AI projects and the operationalization of machine learning models
    • Experienced in deploying AI solutions on major cloud platforms
    • Hands-on with Python and deep learning frameworks such as TensorFlow or PyTorch
    • Skilled in interpersonal communication, analytical reasoning, and complex problem-solving
    • Capable of translating technical concepts into clear, concise insights that non-technical audiences can easily grasp
    • Proficient in business communication in English at an upper-intermediate level
       

    AND YOU WANT TO

    • Work with the full data analysis, deep learning, and machine learning model pipeline, including deep analysis of customer data, modeling, and deployment in production
    • Choose relevant computational tools for study, experiment, or trial research objectives
    • Drive the development of innovative solutions for language generation, text synthesis, and creative content generation using the latest state-of-the-art techniques
    • Develop and implement advanced Generative AI solutions such as intelligent assistants, Retrieval-Augmented Generation (RAG) systems, and other innovative applications
    • Produce clear, concise, well-organized, and error-free computer programs with the appropriate technological stack
    • Present results directly to stakeholders and gather business requirements
    • Develop expertise in state-of-the-art Generative AI techniques and methodologies
    • Grow your skill set within a dynamic and supportive environment
    • Work with Big Data solutions and advanced data tools in cloud platforms
    • Build and operationalize ML models, including data manipulation, experiment design, developing analysis plans, and generating insights
    • Lead teams of data scientists and software engineers to successful project execution


    TOGETHER WE WILL

    • Be part of a team that's shaping the future of AI and data science through innovation and shared growth.
    • Advance the frontier of Agentic AI by shaping intelligent multi-agent ecosystems that drive autonomy, scalability, and measurable business value.
    • Have access to world-class training, cutting-edge research, and collaborate with top industry partners.
    • Maintain synergy between Data Scientists, the DevOps team, and ML Engineers to build infrastructure, set up processes, productize machine learning pipelines, and integrate them into existing business environments
    • Communicate with world-leading companies from our client portfolio
    • Enjoy the opportunity to work with the latest modern tools and technologies on various projects
    • Participate in international events and get certifications in cutting-edge technologies
    • Have access to powerful educational and mentorship programs
    • Revolutionize the software industry and drive innovation in adaptive self-learning technologies by leveraging multidisciplinary expertise