Jobs (94)

  • 62 views · 1 application · 7d

    Senior/Middle Data Scientist

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master's or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, and RAG.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas.

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar data processing pipeline approaches.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and datasets, or experience with multilingual data processing, can be an advantage given the project's focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 72 views · 6 applications · 13d

    Senior/Middle Data Scientist

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master's or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, and RAG.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas.

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar data processing pipeline approaches.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and datasets, or experience with multilingual data processing, can be an advantage given the project's focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 65 views · 2 applications · 13d

    Senior/Middle Data Scientist (Data Preparation, Pre-training)

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for an experienced Senior/Middle Data Scientist with a passion for Large Language Models (LLMs) and cutting-edge AI research. In this role, you will focus on designing and prototyping data preparation pipelines, collaborating closely with data engineers to transform your prototypes into scalable production pipelines, and actively developing model training pipelines with other talented data scientists. Your work will directly shape the quality and capabilities of the models by ensuring we feed them the highest-quality, most relevant data possible.

    Requirements:
    Education & Experience:
    - 3+ years of experience in Data Science or Machine Learning, preferably with a focus on NLP.
    - Proven experience in data preprocessing, cleaning, and feature engineering for large-scale datasets of unstructured data (text, code, documents, etc.).
    - Advanced degree (Master's or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.
    NLP Expertise:
    - Good knowledge of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, semantic search, text classification, sequence tagging (NER), transformers/LLMs, and RAG.
    - Familiarity with LLM training and fine-tuning techniques.
    ML & Programming Skills:
    - Proficiency in Python and common data science and NLP libraries (pandas, NumPy, scikit-learn, spaCy, NLTK, langdetect, fasttext).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.
    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.
    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP, or Azure) and big data technologies (Spark, Hadoop, Ray, Dask) for scaling data processing or model training.
    Communication & Personality:
    - Experience working in a collaborative, cross-functional environment.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.
    - Ability to rapidly prototype and iterate on ideas.

    Nice to have:
    Advanced NLP/ML Techniques:
    - Familiarity with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Understanding of FineWeb2 or similar data processing pipeline approaches.
    Research & Community:
    - Publications in NLP/ML conferences or contributions to open-source NLP projects.
    - Active participation in the AI community or demonstrated continuous learning (e.g., Kaggle competitions, research collaborations) indicating a passion for staying at the forefront of the field.
    Domain & Language Knowledge:
    - Familiarity with the Ukrainian language and context.
    - Understanding of cultural and linguistic nuances that could inform model training and evaluation in a Ukrainian context.
    - Knowledge of Ukrainian text sources and datasets, or experience with multilingual data processing, can be an advantage given the project's focus.
    MLOps & Infrastructure:
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    - Experience in working alongside MLOps engineers to streamline the deployment and monitoring of NLP models.
    Problem-Solving:
    - Innovative mindset with the ability to approach open-ended AI problems creatively.
    - Comfort in a fast-paced R&D environment where you can adapt to new challenges, propose solutions, and drive them to implementation.

    Responsibilities:
    - Design, prototype, and validate data preparation and transformation steps for LLM training datasets, including cleaning and normalization of text, filtering of toxic content, de-duplication, de-noising, detection and deletion of personal data, etc.
    - Form specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    - Analyze large-scale raw text, code, and multimodal data sources for quality, coverage, and relevance.
    - Develop heuristics, filtering rules, and cleaning techniques to maximize training data effectiveness.
    - Collaborate with data engineers to hand over prototypes for automation and scaling.
    - Research and develop best practices and novel techniques in LLM training pipelines.
    - Monitor and evaluate data quality impact on model performance through experiments and benchmarks.
    - Research and implement best practices in large-scale dataset creation for AI/ML models.
    - Document methodologies and share insights with internal teams.

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 17 views · 1 application · 18d

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · C1 - Advanced

    We are looking for a Data Scientist to support a Data & AI team. The role is focused on developing scalable AI/ML solutions and integrating Generative AI into evolving business operations.


    About the Role

    As a Data Scientist, you will:

    • Collaborate with Product Owners, Data Analysts, Data Engineers, and ML Engineers to design, develop, deploy, and monitor scalable AI/ML products.
    • Lead initiatives to integrate Generative AI into business processes.
    • Work closely with business stakeholders to understand challenges and deliver tailored data-driven solutions.
    • Monitor model performance and implement improvements.
    • Apply best practices in data science and ML for sustainable, high-quality results.
    • Develop and fine-tune models with a strong focus on accuracy and business value.
    • Leverage cutting-edge technologies to drive innovation and efficiency.
    • Stay updated on advancements in AI and data science, applying new techniques to ongoing processes.


    About the Candidate

    We are looking for a professional with strong analytical and technical expertise.


    Must have:

    • 3+ years of hands-on experience in Data Science and ML.
    • Experience with recommendation systems and prescriptive analytics.
    • Proficiency in Python, SQL, and ML libraries/frameworks.
    • Proven experience developing ML models and applying statistical methods.
    • Familiarity with containerization and orchestration tools.
    • Excellent communication skills and strong command of English.
    • Bachelor's or Master's degree in Computer Science, Statistics, Physics, or Mathematics.


    Nice to have:

    • Experience with Snowflake.
    • Exposure to Generative AI and large language models.
    • Knowledge of AWS services.
    • Familiarity with NLP models (including transformers).
  • 83 views · 8 applications · 27d

    Data Science Engineer / AI Agent Systems Engineer

    Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate

    We're looking for an experienced engineer to join our team and work on building production-ready AI systems. This role is perfect for someone who enjoys combining AI/ML expertise with solid software engineering practices to deliver real-world solutions.

     

    Requirements:

    - AI/ML: 2+ years hands-on with LLM APIs, production deployment of at least one AI system

    - Experience with LangChain, CrewAI, or AutoGen (one is enough)

    - Understanding of prompt engineering (Chain-of-Thought, ReAct) and tool/function calling

    - Python: 3+ years experience, strong fundamentals, Flask/FastAPI, async/await, REST APIs

    - Production Experience: built systems running in production, handled logging, testing, error handling

    - Cloud experience with AWS / GCP / Azure (one is enough)

    - Familiar with Git, CI/CD, databases (PostgreSQL/MySQL)

     

    Nice to Have:

    Experience with vector databases (Pinecone, Weaviate)

    Docker/containerization knowledge

    Fintech or financial services background

    Advanced ML/AI education or certifications

    What You'll Work On:

    - Designing and deploying AI-powered systems using LLMs (OpenAI, Anthropic, etc.)

    - Building agent-based solutions with frameworks like LangChain, CrewAI, or AutoGen

    - Integrating AI systems with external APIs, databases, and production services

    - Writing clean, tested Python code and deploying services to the cloud

    - Collaborating with stakeholders to translate business requirements into technical solutions

    Project

    A system for automating accounting operations for companies, which reads, analyzes, compares, and interacts with accounting data. The goal is to make processes faster, more accurate, and scalable, minimize manual work, and increase client efficiency.

     

    Project stage: MVP is nearly complete; the next step is to automate the MVP and scale the product.

  • 79 views · 4 applications · 27d

    Data Science Engineer / AI Agent Systems Engineer

    Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate

    We're looking for an experienced engineer to join our team and work on building production-ready AI systems. This role is perfect for someone who enjoys combining AI/ML expertise with solid software engineering practices to deliver real-world solutions.

     

    Requirements:

    - AI/ML: 2+ years hands-on with LLM APIs, production deployment of at least one AI system

    - Experience with LangChain, CrewAI, or AutoGen (one is enough)

    - Understanding of prompt engineering (Chain-of-Thought, ReAct) and tool/function calling

    - Python: 3+ years experience, strong fundamentals, Flask/FastAPI, async/await, REST APIs

    - Production Experience: built systems running in production, handled logging, testing, error handling

    - Cloud experience with AWS / GCP / Azure (one is enough)

    - Familiar with Git, CI/CD, databases (PostgreSQL/MySQL)

     

    Nice to Have:

    Experience with vector databases (Pinecone, Weaviate)

    Docker/containerization knowledge

    Fintech or financial services background

    Advanced ML/AI education or certifications

    What You'll Work On:

    - Designing and deploying AI-powered systems using LLMs (OpenAI, Anthropic, etc.)

    - Building agent-based solutions with frameworks like LangChain, CrewAI, or AutoGen

    - Integrating AI systems with external APIs, databases, and production services

    - Writing clean, tested Python code and deploying services to the cloud

    - Collaborating with stakeholders to translate business requirements into technical solutions

     

    Project

    A system for automating accounting operations for companies, which reads, analyzes, compares, and interacts with accounting data. The goal is to make processes faster, more accurate, and scalable, minimize manual work, and increase client efficiency.

     

    Project stage: MVP is nearly complete; the next step is to automate the MVP and scale the product.

  • 37 views · 2 applications · 10d

    3D Computer Vision Engineer (3D Reconstruction)

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · B2 - Upper Intermediate

    DeepX is on a mission to push the boundaries of 3D perception. We're looking for an experienced Computer Vision Engineer (3D Reconstruction) who lives and breathes reconstruction pipelines, thrives on geometry challenges, and wants to architect systems that redefine how machines see the world.

    This is not a plug-and-play role. You'll be building the core reconstruction engine that powers our vision products - designing a modular, blazing-fast pipeline where algorithms can be swapped in and out like precision-tuned gears. Think COLMAP on steroids, fused with neural rendering, and optimized for scale.

        Core Tech Stack:

    • Languages: C++, Python
    • CV/3D Libraries: OpenCV, Open3D, PCL, COLMAP
    • Math/Utils: NumPy, Eigen
    • Visualization: Plotly, Matplotlib
    • Deep Learning: PyTorch, TensorFlow
    • Data: Point clouds, meshes, multi-view image sets.
       

      Desired Expertise: 

    • Core Expertise: Deep, hands-on knowledge of 3D computer vision fundamentals, including projective geometry, triangulation, transformations, and camera models.
    • Algorithm Mastery: Proven experience with point cloud and mesh processing algorithms, such as ICP for registration and refinement.
    • Development Experience: Strong software engineering skills, primarily in a Linux environment. Experience deploying applications on Windows (or via WSL) is a major plus.
    • Data Handling: Experience managing and analyzing the large datasets typical in 3D reconstruction.
    • Projective Geometry Mastery: Camera models, projections, triangulation, multi-sensor fusion.
    • Transformations: Rotations, quaternions, coordinate system conversions, 3D frame manipulations.
    • SfM & MVS: Proven hands-on with pipelines and dense reconstructions.
    • SLAM: Bundle adjustment, pose graph optimization, loop closure.
    • Code Craft: Strong software engineering chops - designing modular, performant, production-grade systems.
    • Visualization: Proficiency with 3D visualization tools and libraries (e.g., OpenGL, Blender scripting) for rendering and debugging point clouds and meshes.
    • Bonus Points:
      - You've built a full 3D reconstruction pipeline from scratch.
      - Hands-on with Gaussian Splatting or NeRFs.
      - Experience with SuperGlue or other state-of-the-art feature matching models.
      - Hybrid reconstruction experience: fusing classical geometry with neural methods.
      - Experience with real-time or streaming reconstruction systems.
      - Familiarity with emerging topics like 3D scene segmentation and the application of LLMs to geometric data.

    What You'll Do:

    • End-to-End Pipeline Development: You will architect, build, and deploy a robust, high-performance 3D reconstruction pipeline from multi-view imagery. This includes owning and optimizing all core modules: feature detection, matching, camera pose estimation, SfM, dense stereo (MVS), and mesh/surface generation.
    • System Architecture: Design a highly modular and scalable system that allows for interchangeable components, facilitating rapid A/B testing between classical geometric algorithms and modern neural approaches.
    • Performance Optimization: Profile and optimize the entire pipeline for low-latency, real-time performance. This involves advanced GPU programming (CUDA/OpenCL), efficient memory management to handle large models, and leveraging modern compute frameworks.
    • Research & Integration: Stay at the forefront of academic and industry research. You will be responsible for identifying, implementing, and integrating state-of-the-art methods in SLAM, neural rendering (NeRFs, 3DGS), and hybrid geometry-neural network models.
    • Data Management: Develop solutions for handling, processing, and distributing large-scale image and 3D datasets (e.g., using tools like Rclone).

      Why Join DeepX?

      This is your chance to own a core engine at the frontier of 3D vision. You'll be surrounded by a small but elite team, working on real-world deployments where your algorithms won't just run in benchmarks - they'll run in airports, mines, logistics hubs, and beyond. If you want your code to shape how machines perceive the world at scale, this is the place.

       

      Sounds like you? -> Let's talk
      Send us your portfolio, GitHub, or projects - we love seeing real reconstructions more than polished CVs.
       

  • 58 views · 0 applications · 4d

    Senior Computer Vision Engineer (3D Perception)

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    DataRoot Labs is a full-cycle AI R&D center - and Ukraine's largest hub of AI talent and compute. Over the past 10+ years, we've shipped 45+ AI-powered products for leaders and startups across Automotive, Energy, Navigation, Gaming, Education, and more.
     

    Our client builds autonomous yard trucks that replace human drivers to make logistics yards safer, faster, and more efficient.
     

    Your mission: make the AI stack see in 3D and drive correctly - every time.

     

    Responsibilities:

    • Invent & build computer vision algorithms with a focus on 3D object detection and scene reconstruction
    • Own productionization: collaborate with ML/Robotics/Platform teams to ship robust, low-latency vision models
    • Push the frontier: work hands-on with modern 3D vision, Deep Learning, and RL/VLA-aware pipelines

       

    Requirements:

    • Strong background in Computer Vision and Deep Learning
    • Hands-on experience with RGB-based 3D detection and occupancy networks
    • Proficiency with PyTorch or TensorFlow
    • Experience in 3D reconstruction, point clouds, or related domains
    • Python mastery; solid problem-solving and analytical skills
    • Upper-Intermediate English (written & spoken)
    • Experience with inference optimization (e.g., quantization/acceleration) is a plus

     

    Would be a plus:

    • Knowledge or practical experience with Reinforcement Learning
    • Work with edge devices & sensors (cameras, LiDAR, radar, etc.)
    • Background in automotive or robotics
    • C++/CUDA familiarity

     

    What We Offer:

    • Real autonomy impact. Work on vehicles that use RL, Vision-Language-Action (VLA), and many other state-of-the-art architectures
    • Access to DGX H200 systems and large-scale GPU clusters
    • Startup pace, research mindset, and a goal-oriented team
    • Newest technical equipment and modern MLOps
    • 20 working days of paid vacation, English courses, educational events & conferences, and medical insurance
    • Your engineering will directly influence how autonomous vehicles perceive and act
  • 132 views · 29 applications · 22d

    Machine Learning Engineer for an Online News Media

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate

    We are seeking a Machine Learning Engineer to enhance our data science and ML capabilities, supporting current and upcoming projects. The ideal candidate will have strong expertise in developing, deploying, and optimizing predictive models and working with modern ML platforms in a cloud environment. You will collaborate closely with product and engineering teams to deliver data-driven solutions that directly impact business outcomes.

    Benefits:

    • Opportunity to work with cutting-edge ML platforms and cloud technologies
    • Supportive and collaborative international team
    • Flexible working schedule
    • Fully remote work with the ability to work from anywhere

     

    Requirements:

    • Proven experience in building and optimizing machine learning models for predictive analytics, regression, forecasting, and categorization tasks.
    • Strong proficiency in Python, including libraries such as pandas, NumPy, scikit-learn, TensorFlow, or PyTorch, for end-to-end model development and deployment.
    • Hands-on expertise with cloud ML platforms like Google AutoML or AWS SageMaker to accelerate and scale model training.
    • Solid knowledge of data processing with tools such as SQL, BigQuery, or Spark, including data wrangling, feature engineering, and managing large-scale workflows.
    • Practical understanding of model lifecycle management, including version control, monitoring, retraining, and performance optimization.
    • Experience integrating ML solutions into cloud-based data pipelines and APIs, ensuring production-level reliability.
    • Familiarity with MLOps best practices, including CI/CD workflows for ML, containerization with Docker, and orchestration with Kubernetes.
    • Strong background in statistical modeling, experimentation, and A/B testing to validate and continuously improve model performance.

     

    Responsibilities:

    • Develop, evaluate, and optimize ML models for predictive analytics, forecasting, and categorization tasks
    • Build and maintain scalable data pipelines and integrate ML solutions into production environments
    • Apply feature engineering, data wrangling, and statistical modeling techniques to improve model performance
    • Manage model lifecycle (monitoring, retraining, versioning) and ensure scalability
    • Collaborate with product managers, engineers, and analysts to align business requirements with technical solutions
    • Participate in A/B testing and experimentation to validate performance of ML models
    • Contribute to MLOps best practices and improve CI/CD workflows for ML projects

    About client:

    A multifaceted digital media company dedicated to helping citizens, consumers, business leaders, and policy officials make important decisions in their lives. The company publishes independent reporting, rankings, data journalism, and advice that has earned the trust of readers and users for nearly 90 years. It is an American media company that publishes news, consumer advice, rankings, and analysis. It was launched in 1948 as the merger of a domestic-focused weekly newspaper and an international-focused weekly magazine. In 1995, the company launched its website, and in 2010 the magazine ceased printing. The company reaches more than 40 million people monthly, in the moments when they are most in need of expert advice and are motivated to act on that advice directly on its platforms.

    Industry:

    Online Media

    Location:

    United States

    About project:

    An American media company that publishes news, opinion, consumer advice, rankings, and analysis. Founded as a news magazine in 1933, it transitioned to primarily web-based publishing in 2010, although it still publishes its rankings. It covers politics, education, health, money, careers, travel, technology, and cars.

  • 49 views · 15 applications · 14d

    Middle Data Scientist

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B1 - Intermediate

    Project
    Global freight management solutions and services, specializing in Freight Audit & Payment, Order Management, Supplier Management, Visibility, TMS and Freight Spend Analytics.

    Overview
    We are looking for a Data Scientist with a strong background in statistics and probability theory to help us build intelligent analytical solutions. The current focus is on outlier detection in freight management data, with further development toward anomaly detection and forecasting models for logistics and freight spend. The role requires both deep analytical thinking and practical hands-on work with data, from SQL extraction to model deployment.

    Key Responsibilities

    • Apply statistical methods and machine learning techniques for outlier and anomaly detection.
    • Design and develop forecasting models to predict freight costs, shipment volumes, and logistics trends.
    • Extract, preprocess, and transform large datasets directly from SQL databases.
    • Categorize exceptions into business-defined groups (e.g., High Value Exceptions, Accessorial Charge Exceptions, Unexpected Origin/Destination).
    • Collaborate with business analysts to align analytical approaches with domain requirements.
    • Use dashboards (e.g., nSight) for validation, visualization, and reporting of results.
    • Ensure models are interpretable, scalable, and deliver actionable insights.

    Requirements

    • Strong foundation in statistics and probability theory.
    • Proficiency in Python with libraries such as pandas, numpy, matplotlib, scikit-learn.
    • Proven experience with outlier/anomaly detection techniques.
    • Hands-on experience in forecasting models (time-series, regression, or advanced ML methods).
    • Strong SQL skills for working with large datasets.
    • Ability to communicate findings effectively to both technical and non-technical stakeholders.

    Nice to Have

    • Experience with ML frameworks (TensorFlow, PyTorch).
    • Familiarity with MLOps practices and model deployment.
    • Exposure to logistics, supply chain, or financial data.
    • Knowledge of cloud platforms (AWS, GCP, Azure).
  • 21 views · 1 application · 13d

    Senior Data Scientist/NLP Lead

    Office Work · Ukraine (Kyiv) · Product · 5 years of experience · B2 - Upper Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.
     

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.
     

    About the role:
    We are looking for an experienced Senior Data Scientist / NLP Lead to spearhead the development of cutting-edge natural language processing solutions for the Ukrainian LLM project. You will lead the NLP team in designing, implementing, and deploying large-scale language models and NLP algorithms that power the products.

    This role is critical to the mission of advancing AI in the Ukrainian language context, and offers the opportunity to drive technical decisions, mentor a team of data scientists, and shape the future of AI capabilities in Ukraine.
     

    Requirements:
    Education & Experience:
    - 5+ years of experience in data science or machine learning, with a strong focus on NLP.
    - Proven track record of developing and deploying NLP or ML models at scale in production environments.
    - An advanced degree (Master's or PhD) in Computer Science, Computational Linguistics, Machine Learning, or a related field is highly preferred.

    NLP Expertise:
    - Deep understanding of natural language processing techniques and algorithms.
    - Hands-on experience with modern NLP approaches, including embedding models, text classification, sequence tagging (NER), and transformers/LLMs.
    - Deep understanding of transformer architectures, knowledge of LLM training and fine-tuning techniques, hands-on experience developing LLM-based solutions, and awareness of linguistic nuances in Ukrainian or other languages.

    Advanced NLP/ML Techniques:
    - Experience with evaluation metrics for language models (perplexity, BLEU, ROUGE, etc.) and with techniques for model optimization (quantization, knowledge distillation) to improve efficiency.
    - Background in information retrieval or RAG (Retrieval-Augmented Generation) is a plus for building systems that augment LLMs with external knowledge.
    ML & Programming Skills:
    - Proficiency in Python and common data science libraries (pandas, NumPy, scikit-learn).
    - Strong experience with deep learning frameworks such as PyTorch or TensorFlow for building NLP models.
    - Ability to write efficient, clean code and debug complex model issues.

    Data & Analytics:
    - Solid understanding of data analytics and statistics.
    - Experience in experimental design, A/B testing, and statistical hypothesis testing to evaluate model performance.
    - Experience building a representative benchmarking framework for an LLM from business requirements.
    - Comfortable working with large datasets, writing complex SQL queries, and using data visualization to inform decisions.

    Deployment & Tools:
    - Experience deploying machine learning models in production (e.g., using REST APIs or batch pipelines) and integrating with real-world applications.
    - Familiarity with MLOps concepts and tools (version control for models/data, CI/CD for ML).
    - Experience with cloud platforms (AWS, GCP or Azure) and big data technologies (Spark, Hadoop) for scaling data processing or model training is a plus.
    - Hands-on experience with containerization (Docker) and orchestration (Kubernetes) for ML, as well as ML workflow tools (MLflow, Airflow).
    Leadership & Communication:
    - Demonstrated ability to lead technical projects and mentor junior team members.
    - Strong communication skills to convey complex ML results to non-technical stakeholders and to document methodologies clearly.

     

    Responsibilities:
    - Lead end-to-end development of NLP and LLM models - from data exploration and model prototyping to validation and production deployment. This includes designing novel model architectures or fine-tuning state-of-the-art transformer models (e.g., BERT, GPT) to solve project-specific language tasks.
    - Analyze large text datasets (Ukrainian and multilingual corpora) to extract insights and build robust training datasets.
    - Guide data collection and annotation efforts to ensure high-quality data for model training.
    - Develop and implement NLP algorithms for a range of tasks such as text classification, named entity recognition, semantic search, and conversational AI.
    - Stay up-to-date with the latest research to apply transformer-based models, embeddings, and other modern NLP techniques in the solutions.
    - Establish evaluation metrics and validation frameworks for model performance, including accuracy, factuality, and bias.
    - Design A/B tests and statistical experiments to compare model variants and validate improvements.
    - Deploy and integrate NLP models into production systems in collaboration with engineers - ensuring models are scalable, efficient, and well-monitored in a real-world setting.
    - Optimize model inference and troubleshoot issues such as model drift or data pipeline bottlenecks.
    - Provide technical leadership and mentorship to the NLP/ML team.
    - Review code and research, uphold best practices in ML (version control, reproducibility, documentation), and foster a culture of continuous learning and innovation.
    - Collaborate cross-functionally with product managers, software engineers, and MLOps engineers to align NLP solutions with product goals and infrastructure capabilities.
    - Communicate complex data science concepts to stakeholders and incorporate their feedback into model development.

     

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 40 views · 2 applications · 13d

    Reinforcement Learning Engineer

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    DataRoot Labs is a full-cycle AI R&D center - and Ukraine's largest hub of AI talent and compute. Over the past 10+ years, we've shipped 45+ AI-powered products for leaders and startups across Automotive, Energy, Navigation, Gaming, Education, and more.
     

    Our client builds autonomous yard trucks that replace human drivers to make logistics yards safer, faster, and more efficient.
     

    Your mission: make the AI stack see in 3D and drive correctly - every time.

    Responsibilities:

    • Research and develop advanced reinforcement learning (RL) algorithms and their applications
    • Build, simulate, and test RL environments for real-world deployment scenarios
    • Apply modern deep learning techniques, including transformers and computer vision, to RL problems
    • Optimize inference performance for large-scale RL solutions

     

    Requirements:

    • Strong background in Reinforcement Learning and Deep Learning
    • Hands-on experience with simulation and environments for RL
    • Proven production experience in RL, including deployment cases
    • Deep understanding of RL techniques (e.g., policy gradients, actor-critic, Q-learning)
    • Experience with transformers and computer vision
    • Upper-Intermediate English (written & spoken) for documentation and collaboration
    • Strong knowledge of Python
    • Experience with inference optimization

     

    Would be a plus:

    • Experience with Vision-Language Architectures (VLAs)
    • Work with edge devices and sensors (e.g., camera, lidar, radar, etc.)
    • Experience in automotive or robotics
    • Knowledge of C++/CUDA

     

    What We Offer:

    • Real autonomy impact. Work on vehicles that use RL, Vision-Language-Action (VLA), and many other state-of-the-art architectures
    • Access to DGX H200 systems and large-scale GPU clusters
    • Startup pace, research mindset, and a goal-oriented team
    • Newest technical equipment and modern MLOps
    • 20 working days of paid vacation, English courses, educational events & conferences, and medical insurance
    • Your engineering will directly influence how autonomous vehicles perceive and act
  • 43 views · 1 application · 13d

    Data Science Team Lead

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate

    Automat-it is where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI solutions that drive real results. We work across EMEA and the US, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.

    We're looking for a hands-on Data Science Team Lead to build and scale production AI/ML solutions on AWS while leading a team of Data Scientists. You'll own the full lifecycle, from discovery and proof-of-concepts to training, optimization, deployment, and iteration in production, partnering closely with customers and cross-functional teams.

     

    If you are interested in this opportunity, please submit your CV in English.

     

    Responsibilities

    • People leadership: Manage and coach a team of Data Scientists, set clear goals, run 1:1s, and support career growth and technical excellence.
    • Delivery ownership: Drive project scoping, planning, and on-time, high-quality delivery from inception to production. Proactively remove blockers, manage risks, and communicate status to stakeholders.
    • Customer engagement: Work directly with founders/technical leaders to understand goals, translate them into feasible AI roadmaps, and ensure measurable business outcomes.
    • Model Development & Deployment: Deploy and train models on AWS SageMaker (using TensorFlow/PyTorch).
    • Model Tuning & Optimization: Fine-tune and optimize models using techniques like quantization and distillation, and tools like Pruna.ai and Replicate.
    • Generative AI Solutions: Design and implement advanced GenAI solutions, including prompt engineering and retrieval-augmented generation (RAG) strategies.
    • LLM Workflows: Develop agentic LLM workflows that incorporate tool usage, memory, and reasoning for complex problem-solving.
    • Scalability & Performance: Maximize model performance on AWS by leveraging techniques such as model compilation, distillation, and quantization, and by using AWS-specific features.
    • Collaboration: Work closely with other teams (Data Engineering, DevOps, MLOps, Solution Architects, Sales teams) to integrate models into production pipelines and workflows.

       

    Requirements

    • Team management experience: 2+ years leading engineers or scientists (people or tech lead).
    • Technical experience: 5-6+ years in Data Science/ML (including deep learning/LLMs).
    • Excellent customer-facing skills to understand and address client needs effectively.
    • Expert in Python and deep learning frameworks (PyTorch/TensorFlow), and hands-on with AWS ML services (especially SageMaker and Bedrock).
    • Proven experience with generative AI and fine-tuning large language models.
    • Experience deploying ML solutions on AWS cloud infrastructure and familiarity with MLOps best practices.
    • Fluent written and verbal communication skills in English.
    • A master's degree in a relevant field and AWS ML certifications are a plus.

       

    Benefits

    • Professional training and certifications covered by the company (AWS, FinOps, Kubernetes, etc.)
    • International work environment
    • Referral program - enjoy working with your colleagues and get a bonus
    • Company events and social gatherings (happy hours, team events, knowledge sharing, etc.)
    • English classes
    • Soft skills training

       

    Country-specific benefits will be discussed during the hiring process.

    Automat-it is committed to fostering a workplace that promotes equal opportunities for all and believes that a diverse workforce is crucial to our success. Our recruitment decisions are based on your experience and skills, recognizing the value you bring to our team.

  • 35 views · 1 application · 28d

    Data Scientist/AI

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    We are now looking for a Data Scientist with LLM usage background to join our development team and help build innovative solutions in medical tourism.
     

    Your Role:

    As a Data Scientist, you will:

    • Extract medical entities and patterns from large volumes of text documents including medical reports, patient histories, and clinical notes using NLP techniques (NER, entity linking) to identify diagnoses, procedures, medications, and symptoms for automated document processing and intelligent search capabilities.
    • Implement clustering and similarity detection algorithms to group similar medical cases, treatment patterns, and patient profiles, enabling better matching between patients and healthcare providers through unsupervised learning methods (K-means, DBSCAN, hierarchical clustering) and semantic similarity measures.
    • Engineer data architecture for new chatbot features and integrate them with existing tools like CRM, databases, and analytics services.
    • Optimize AI solutions to enhance system speed, stability, and quality, including refactoring and improving existing functionalities, while incorporating state-of-the-art AI methodologies and frameworks.
    • Build personalized offer-generation services by aggregating data from forms, chats, attached files, clinic price lists, and historical databases to generate real-time, patient-specific proposals.
    • Work with databases (MySQL, PostgreSQL, MongoDB) to design scalable structures, write efficient queries, and ensure data security and integrity.
    • Test, refactor, and maintain code while documenting key decisions and ensuring clean, maintainable systems.
    • Support our chatbot assistant by designing and implementing advanced AI-driven modules (multi-layered task decomposition for optimal results), leveraging Natural Language Processing + LLM services to enhance conversational capabilities and user experience.
    • Enhance business processes by brainstorming with Sales and Product teams, suggesting improvements, and exploring new AI/ML advancements (e.g., OpenAI, Llama, and other ML tools) to integrate novel solutions into existing workflows.
       

    What You Need:

    • 4+ years of experience in data science or related fields.
    • Strong expertise in Python (preferably with FastAPI).
    • A solid understanding of modern big data analysis approaches, clustering, and correlation searches.
    • Prompt engineering (OpenAI / Claude models).
    • Database architecture skills.
    • Proficiency in MySQL and MongoDB.
    • Hands-on experience with AI tools for tasks like text/image analysis and content generation.

    What We Offer

    • Competitive salary aligned with market standards.
    • Engaging projects in the medical tourism and travel industries.
    • A collaborative environment of Python and PHP engineers, frontend developers, product managers, and medical coordinators.
    • Comprehensive benefits package, including:
      • 22 paid vacation days annually.
      • Medical insurance.
      • Compensation for sports activities.
      • Remote work flexibility with an 8-hour workday starting between 8-10 AM.
    • Opportunities to grow your skills with regular updates to your compensation.
    • The chance to impact a meaningful product that transforms lives.
  • 30 views · 1 application · 27d

    Data Science Engineer / AI Agent Systems Engineer

    Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate

    We're building production-ready AI solutions and looking for an engineer who can help us make our systems scalable, reliable, and intelligent. If you love combining AI/ML expertise with strong software engineering practices - this role is for you.

     

    What You'll Work On:
    - Designing and deploying AI-powered systems using LLMs (OpenAI, Anthropic, etc.)
    - Building agent-based solutions with frameworks like LangChain, CrewAI, or AutoGen
    - Integrating AI systems with external APIs, databases, and production services
    - Writing clean, tested Python code and deploying services to the cloud
    - Collaborating with stakeholders to translate business requirements into technical solutions

     

    Must-Have:
    - AI/ML: 2+ years hands-on, with at least one AI system deployed in production
    - LLM stack: LangChain / CrewAI / AutoGen (one is enough)
    - Prompt engineering: Chain-of-Thought, ReAct, tool/function calling
    - Python: 3+ years, strong fundamentals, Flask/FastAPI, async/await, REST APIs
    - Production experience: logging, testing, error handling, and running systems in production
    - Cloud: AWS / GCP / Azure (any one is fine)
    - Engineering culture: Git, CI/CD, databases (PostgreSQL / MySQL)

     

    Nice-to-Have:
    - Vector databases (Pinecone, Weaviate)
    - Docker / containerization
    - Background in Fintech or financial services
    - Advanced ML/AI education or certifications

     

    The Project:
    We're building a system to automate accounting operations for companies.
    It reads, analyzes, compares, and interacts with accounting data.
    The goal: faster, more accurate, and scalable processes, reducing manual work and boosting client efficiency.
    Current stage: MVP is nearly complete. The next step is automating the MVP and scaling the product.
