Matoffo

Joined in 2021
11% answers

Matoffo is a cloud-native company that envisions cloud computing as the cornerstone of technological advancement. Our team comprises highly skilled engineers specializing in cloud solutions. As an officially recognized AWS Advanced Tier Services Partner, we excel in developing scalable cloud-native applications, offering AI, Cloud, DevOps, Data & Software Engineering services.


    Machine Learning Engineer

    Part-time · Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate

    Responsibilities

     

    Model Fine-Tuning and Deployment:

    Fine-tune pre-trained models (e.g., BERT, GPT) for specific tasks and deploy them using Amazon SageMaker and Bedrock.
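The fine-tuning step usually starts with converting task data into the prompt/completion records a managed fine-tuning job expects. A minimal sketch, assuming a prompt/completion JSONL schema (the field names vary by service, so check the SageMaker/Bedrock documentation for the exact format):

```python
import json

def to_finetune_jsonl(examples):
    """Convert (input, target) pairs into prompt/completion JSONL lines,
    the record shape commonly used by managed fine-tuning jobs.
    Field names here are illustrative; verify the target service's schema."""
    lines = []
    for source, target in examples:
        record = {"prompt": source.strip(), "completion": " " + target.strip()}
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

examples = [
    ("Classify the ticket: 'Pump is leaking oil.'", "maintenance"),
    ("Classify the ticket: 'Invoice amount is wrong.'", "billing"),
]
jsonl = to_finetune_jsonl(examples)
```

The resulting file would then be uploaded to S3 and referenced when the fine-tuning job is created.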

    RAG Workflows:

    Establish Retrieval-Augmented Generation (RAG) workflows that leverage knowledge bases built on Kendra or OpenSearch. This includes integrating various data sources, such as corporate documents, inspection checklists, and real-time external data feeds.
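At its core, a RAG workflow retrieves relevant passages and grounds the model's prompt in them. A toy, self-contained sketch of that loop (word-overlap scoring stands in for a real Kendra/OpenSearch query, and the documents are invented examples):

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query.
    Stands in for a Kendra/OpenSearch query in a real RAG workflow."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Inspection checklist: verify valve pressure before startup.",
    "HR policy: vacation requests require two weeks notice.",
    "Maintenance log: valve pressure readings recorded daily.",
]
prompt = build_prompt("What is the valve pressure procedure?", docs)
```

In production the `retrieve` step is replaced by a query against the knowledge base, and `prompt` is sent to the model via Bedrock.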

    MLOps Integration:

Build and maintain a comprehensive MLOps framework that manages the end-to-end lifecycle of machine learning models. This includes continuous integration and delivery (CI/CD) pipelines for model training, versioning, deployment, and monitoring. Automated workflows ensure that models stay up to date with the latest data and remain optimized for performance in production environments.
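One small but typical piece of such a CI/CD pipeline is a promotion gate that compares a candidate model against the one currently in production. A minimal sketch, assuming F1 as the primary metric (the metric name and threshold are illustrative):

```python
def should_promote(candidate, production, metric="f1", min_gain=0.01):
    """Promotion gate for a model CI/CD pipeline: deploy the candidate
    only if it beats the production model by at least min_gain."""
    return candidate[metric] >= production[metric] + min_gain

# Clear win: promote. Tie: keep the production model.
promote = should_promote({"f1": 0.95}, {"f1": 0.90})
hold = should_promote({"f1": 0.90}, {"f1": 0.90})
```

In a real pipeline this check would run after the evaluation step and gate the deployment stage.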

    Scalable and Customizable Solutions:

    Ensure that both the template and ingestion pipelines are scalable, allowing for adjustments to meet specific customer needs and environments. This involves setting up RAG workflows, knowledge bases using Kendra/OpenSearch, and seamless integration with customer data sources.

    End-to-End Workflow Automation:

    Automate the end-to-end process from user input to response generation, ensuring that the solution leverages AWS services like Bedrock Agents, CloudWatch, and QuickSight for real-time monitoring and analytics.

    Advanced Monitoring and Analytics:

Integrate the accelerator with AWS CloudWatch, QuickSight, and other monitoring tools to provide real-time insights into performance metrics, user interactions, and system health, enabling continuous optimization of service delivery and rapid identification of issues.

    Model Monitoring and Maintenance:

    Implement model monitoring to track performance metrics and trigger retraining as necessary.
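The retraining trigger mentioned above can be as simple as a statistical drift check on incoming features. A minimal sketch using a z-score on the feature mean (the threshold and data are illustrative; production systems typically use richer tests such as PSI or KS):

```python
from statistics import mean, pstdev

def needs_retraining(baseline, recent, z_threshold=3.0):
    """Simple drift check: flag retraining when the recent feature mean
    drifts more than z_threshold baseline standard deviations away."""
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > z_threshold

baseline = [10.0, 10.5, 9.5, 10.2, 9.8]
stable = needs_retraining(baseline, [10.1, 9.9, 10.0])   # within range
drifted = needs_retraining(baseline, [30.0, 31.0, 29.0]) # far outside
```

When the check fires, the pipeline would kick off a retraining job rather than alert a human first.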

    Collaboration:

    Work closely with data engineers and DevOps engineers to ensure seamless integration of models into the production pipeline.

    Documentation:

    Document model development processes, deployment procedures, and monitoring setups for knowledge sharing and future reference.

     

    Must-Have Skills

     

    Machine Learning: Strong experience with machine learning frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.

    MLOps Tools: Proficiency with Amazon SageMaker for model training, deployment, and monitoring.

    Document processing: Experience with document processing for Word, PDF, images.

OCR: Experience with OCR tools such as Tesseract or AWS Textract (preferred).

    Programming: Proficiency in Python, including libraries such as Pandas, NumPy, and Scikit-Learn.

    Model Deployment: Experience with deploying and managing machine learning models in production environments.

    Version Control: Familiarity with version control systems like Git.

    Automation: Experience with automating ML workflows using tools like AWS Step Functions or Apache Airflow.

    Agile Methodologies: Experience working in Agile environments using tools like Jira and Confluence.
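The OCR and document-processing requirements above usually come with a post-processing stage, since raw OCR output is noisy. A small sketch of that cleanup step (the OCR call itself, e.g. `pytesseract.image_to_string` or a Textract `DetectDocumentText` request, is assumed to have already produced `raw`):

```python
import re

def clean_ocr_text(raw):
    """Normalize raw OCR output: join words hyphenated across line breaks,
    collapse runs of whitespace, and strip stray form-feed characters."""
    text = raw.replace("\f", " ")
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)  # re-join hyphenated words
    text = re.sub(r"\s+", " ", text)              # collapse whitespace
    return text.strip()

cleaned = clean_ocr_text("inspec-\ntion  report\f\ncomplete")
```

Downstream steps (chunking, indexing into a knowledge base) then work on the cleaned text.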

     

    Nice-to-Have Skills

     

LLM: Experience with LLM / GenAI models, LLM services (Bedrock or OpenAI), LLM abstraction layers (e.g., Dify, LangChain, FlowiseAI), agent frameworks, and RAG.

    Deep Learning: Experience with deep learning models and techniques.

    Data Engineering: Basic understanding of data pipelines and ETL processes.

    Containerization: Experience with Docker and Kubernetes (EKS).

    Serverless Architectures: Experience with AWS Lambda and Step Functions.

Rule engine frameworks: Drools or similar.

     

    If you are a motivated individual with a passion for ML and a desire to contribute to a dynamic team environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of infrastructure and driving innovation in software delivery processes.


    ML Engineer

    Hybrid Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Description
As an ML Engineer, you will participate in developing the Field Service Engagement Accelerator, an AI-driven solution designed to enhance customer interactions and service delivery in field service operations. The accelerator leverages the latest advancements in machine learning, data processing, and cloud infrastructure to provide a customizable, scalable platform that can be rapidly deployed across multiple customers. This is a greenfield project.

Key responsibilities include:
- Template Pipeline Development: Design and implement a flexible template pipeline that can be customized and deployed per customer. This pipeline will manage user input, integrating with LLMs using AWS Textract, AWS Connect, and agentic workflows.
- Building Data APIs: Develop robust APIs that interface with user engagement channels, capturing and processing inputs via AWS Textract and AWS Connect.
- Ingestion Pipeline Creation: Design and implement a metadata-driven template ingestion pipeline that automates the loading of public data, both web pages and documents (Word, PDF). This pipeline will support the creation of a multimodal knowledge base, which will be critical for customer-specific deployments.
- MLOps Integration: Build a comprehensive MLOps framework to manage the end-to-end lifecycle of machine learning models, including continuous integration and delivery (CI/CD) pipelines for model training, versioning, deployment, and monitoring. Automated workflows ensure that models are kept up to date with the latest data and are optimized for performance in production environments.
- Real-Time Engagement: With AWS services such as API Gateway, Lambda, and WebSocket APIs, the accelerator supports real-time updates and asynchronous streaming, enhancing user engagement and reducing the risk of abandonment during interactions.
- Advanced Monitoring and Analytics: Integrate AWS CloudWatch, QuickSight, and other monitoring tools so the accelerator provides real-time insights into performance metrics, user interactions, and system health, allowing continuous optimization of service delivery and rapid identification of issues.
- RAG Workflows: Establish Retrieval-Augmented Generation (RAG) workflows that leverage knowledge bases built on Kendra or OpenSearch, integrating data sources such as corporate documents, inspection checklists, and real-time external data feeds.
- Scalable and Customizable Solutions: Ensure that both the template and ingestion pipelines are scalable, allowing for adjustments to meet specific customer needs and environments. This involves setting up RAG workflows, knowledge bases using Kendra/OpenSearch, and seamless integration with customer data sources.
- End-to-End Workflow Automation: Automate the end-to-end process from user input to response generation, ensuring that the solution leverages AWS services like Bedrock Agents, CloudWatch, and QuickSight for real-time monitoring and analytics.
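End-to-end automation of this kind is often expressed as a Step Functions state machine. A sketch of an Amazon States Language definition chaining train → evaluate → conditional deploy (the Lambda ARNs, account ID, and the 0.9 threshold are hypothetical placeholders):

```python
import json

def training_workflow_definition():
    """Sketch of a Step Functions state machine (Amazon States Language)
    that trains a model, evaluates it, and deploys only if it scores well.
    Resource ARNs below are placeholders, not real deployed functions."""
    return {
        "StartAt": "TrainModel",
        "States": {
            "TrainModel": {
                "Type": "Task",
                "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
                "Next": "EvaluateModel",
            },
            "EvaluateModel": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:evaluate-model",
                "Next": "GoodEnough",
            },
            "GoodEnough": {
                "Type": "Choice",
                "Choices": [{"Variable": "$.f1",
                             "NumericGreaterThan": 0.9,
                             "Next": "DeployModel"}],
                "Default": "NotifyTeam",
            },
            "DeployModel": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:deploy-model",
                "End": True,
            },
            "NotifyTeam": {
                "Type": "Task",
                "Resource": "arn:aws:states:::sns:publish",
                "End": True,
            },
        },
    }

definition = training_workflow_definition()
```

The dict serializes directly to the JSON that `create_state_machine` accepts; a real definition would also carry `Parameters` blocks for each service integration.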
     

    Skills
    Must have:
    -Data engineering background is a must.
-Metadata-driven data processing workflows (must-have for lead, nice-to-have for developer)
-ML engineering background is a must: experience building ML pipelines and packaging, optimizing, fine-tuning, and serving models.
    -Strong expertise in AWS data and ML services
    -Strong experience building data pipelines with Python
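Metadata-driven processing, as asked for above, often boils down to a dispatch table keyed on source type. A minimal sketch in Python (the loader lambdas are stand-ins for real PDF/Word/web extractors, and the file names are invented):

```python
from pathlib import Path

# Stand-ins for real extractors (Textract for PDFs, python-docx for Word, etc.)
LOADERS = {
    ".pdf":  lambda p: f"pdf-extract:{p.name}",
    ".docx": lambda p: f"docx-extract:{p.name}",
    ".html": lambda p: f"web-extract:{p.name}",
}

def ingest(metadata):
    """Metadata-driven ingestion: each metadata record names a source,
    and the file suffix selects the loader. Unknown types are skipped
    and reported so the pipeline keeps running."""
    loaded, skipped = [], []
    for record in metadata:
        path = Path(record["source"])
        loader = LOADERS.get(path.suffix.lower())
        if loader:
            loaded.append(loader(path))
        else:
            skipped.append(path.name)
    return loaded, skipped

loaded, skipped = ingest([
    {"source": "checklist.pdf"},
    {"source": "policy.docx"},
    {"source": "notes.txt"},
])
```

New source types are then supported by registering a loader, not by editing pipeline logic.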

    Nice to have:
-Commercial, or at least non-commercial, experience with LLMs and agents
-LangChain or similar
    -AWS Bedrock
    -AWS Bedrock knowledge bases
    -Textract experience or any OCR experience related to document processing


    Robotic Engineer (Raspberry Pi, Python/AI) – MVP Development

    Hybrid Remote · Ukraine · Product · 5 years of experience

    About the Role

    We are looking for a hands-on Robotic Engineer with strong expertise in Raspberry Pi, embedded systems, and hardware prototyping to build an MVP for a cloud-connected robotic system. The ideal candidate is comfortable working across electronics, sensors, actuators, and programming as well as has enough Python/AI knowledge to integrate the device with external cloud-based AI services (e.g., AWS, custom inference endpoints).

    This role is ideal for an engineer who enjoys building physical systems quickly, iterating in a startup-like environment, and creating robust prototypes that can evolve into production-ready solutions.

     

    Responsibilities

Hardware & Robotics Development

    - Design and assemble a functional MVP of a lightweight robotic device using Raspberry Pi, microcontrollers, sensors, actuators, and mechanical components.

    - Select, integrate, and test appropriate hardware modules (cameras, motors, servos, power systems, communication modules, etc.).

    - Build reliable control logic for movement, sensing, and feedback loops.

    - Create wiring diagrams, hardware documentation, and basic mechanical designs.
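A feedback loop of the kind described above is often just a proportional controller around a sensor reading. A pure-Python sketch (on real hardware the command would drive a motor via PWM on a Raspberry Pi GPIO pin and `current` would be re-read from an encoder or distance sensor; the gain and step count are illustrative):

```python
def p_control_step(target, current, kp=0.5):
    """One step of a proportional controller: command scales with error."""
    return kp * (target - current)

def simulate(target, current, steps=50, kp=0.5):
    """Simulate the feedback loop in software. On hardware, each iteration
    would write the command to a motor driver and re-read the sensor."""
    for _ in range(steps):
        current += p_control_step(target, current, kp)
    return current

final = simulate(100.0, 0.0)  # converges toward the 100.0 setpoint
```

Derivative and integral terms (full PID) would be added once overshoot or steady-state error shows up on the physical device.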

     

    Software Development

    - Develop control scripts on Raspberry Pi using Python.

    - Implement communication protocols.

    - Integrate external AI capabilities such as vision models, speech interfaces, or decision engines.

     

    Cloud & AI Integration (Good to Have)

    - Connect robotic control logic to cloud AI services (AWS AI/ML services, custom inference endpoints).

    - Enable remote operation, monitoring, logging, and updates for the robotic device.

     

    Requirements

    - Proven experience building robotics prototypes or hardware MVPs.

    - Strong expertise with Raspberry Pi, embedded Linux, and peripheral integration.

    - Solid understanding of electronics: wiring, power, sensors, actuators, controllers.

    - Proficiency in Python (hardware interaction, automation scripts, API integration).

    - Experience with real-time control systems and debugging hardware-software interactions.

    - Ability to move quickly, test hypotheses, and deliver working prototypes under time constraints.

     

    Good-to-Have Skills

    - Familiarity with cloud platforms (preferably AWS) and IoT frameworks.

    - Experience integrating AI/ML capabilities (vision, audio, LLMs, robotics frameworks).

    - Basic mechanical engineering or CAD design (Fusion 360, SolidWorks, etc.).

    - Knowledge of ROS (Robot Operating System) or similar middleware.

    - Understanding of edge AI and optimization techniques.

     

    Soft Skills

    - Independent, proactive, and capable of taking ownership from concept to MVP.

    - Comfortable working with ambiguity and iterating on prototypes.

    - Strong communication skills for documenting and presenting progress.

     

    What We Offer

    - Opportunity to build a real robotic product from scratch.

    - Collaborative environment with experts in cloud, AI, and software engineering.

    - Flexible work arrangement and competitive compensation.

    - Potential for long-term engagement on future iterations and scaling.
