Machine Learning Engineer

Responsibilities

 

Model Fine-Tuning and Deployment:

Fine-tune pre-trained models (e.g., BERT, GPT) for specific tasks and deploy them using Amazon SageMaker and Bedrock.
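To illustrate the concept, here is a toy sketch of fine-tuning: start from "pre-trained" weights and take a few gradient steps on task-specific data. In practice this role would use Hugging Face Transformers with SageMaker training jobs; the one-parameter model below is purely a stand-in for the idea.

```python
# Toy illustration of fine-tuning: adapt weights learned on a generic task
# to a new task via a few gradient steps. All numbers here are illustrative.

def fine_tune(w, data, lr=0.1, epochs=50):
    """One-parameter linear model y = w*x, fit by plain gradient descent on MSE."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrained_w = 0.5  # weight "learned" on a generic task
task_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # task-specific data: y = 2x
tuned_w = fine_tune(pretrained_w, task_data)  # converges toward 2.0
```

The same shape applies at scale: load pre-trained weights, run a short training loop on task data, then register and deploy the tuned model.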

RAG Workflows:

Establish Retrieval-Augmented Generation (RAG) workflows that leverage knowledge bases built on Kendra or OpenSearch. This includes integrating various data sources, such as corporate documents, inspection checklists, and real-time external data feeds.
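A minimal sketch of that RAG flow, with assumptions labeled: retrieval is a toy keyword-overlap ranker standing in for a Kendra/OpenSearch query, and the assembled prompt is where a Bedrock model invocation would go.

```python
# Minimal RAG sketch. The retriever below is a keyword-overlap toy; in
# production it would be a Kendra or OpenSearch query, and the prompt
# would be sent to a model via Bedrock.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by shared query terms (stand-in for a real search index)."""
    terms = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the context-augmented prompt for the LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Inspection checklist: verify valve pressure before startup.",
    "Corporate policy: all reports are filed quarterly.",
    "Real-time feed: valve pressure currently nominal.",
]
prompt = build_prompt("What is the valve pressure procedure?",
                      retrieve("valve pressure procedure", docs))
```

The "various data sources" in the responsibility above correspond to the `docs` list here: in the real system, corporate documents, checklists, and external feeds would each be ingested into the knowledge base that `retrieve` queries.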

MLOps Integration:

Build and maintain a comprehensive MLOps framework to manage the end-to-end lifecycle of machine learning models, including continuous integration and delivery (CI/CD) pipelines for model training, versioning, deployment, and monitoring. Automate workflows so that models stay up to date with the latest data and remain optimized for performance in production environments.
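The stages such a pipeline chains together can be sketched as plain functions. Stage logic and the quality threshold below are illustrative assumptions; in practice each stage would be a SageMaker Pipelines or Step Functions step rather than a local function call.

```python
# Illustrative CI/CD pipeline stages: train -> evaluate -> gate -> deploy.
# The "model" and metric are placeholders; real stages would run as
# SageMaker Pipelines or Step Functions steps.

def train(data):
    # placeholder training: the "model" is just the mean of the data
    return {"version": 1, "mean": sum(data) / len(data)}

def evaluate(model, holdout):
    # placeholder metric: mean absolute error against a holdout set
    return sum(abs(x - model["mean"]) for x in holdout) / len(holdout)

def deploy(model, registry):
    # versioned registration stands in for pushing to a model registry/endpoint
    registry[model["version"]] = model
    return f"deployed v{model['version']}"

def pipeline(data, holdout, registry, max_error=1.0):
    model = train(data)
    error = evaluate(model, holdout)
    # quality gate: only promote models that meet the accuracy bar
    return deploy(model, registry) if error <= max_error else "rejected"

registry = {}
status = pipeline([1.0, 2.0, 3.0], [2.0, 2.5], registry)
```

The key design point is the quality gate between evaluation and deployment: a model that fails the bar never reaches production, which is what keeps automated retraining safe.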

Scalable and Customizable Solutions:

Ensure that both the template and ingestion pipelines are scalable, allowing for adjustments to meet specific customer needs and environments. This involves setting up RAG workflows, knowledge bases using Kendra/OpenSearch, and seamless integration with customer data sources.

End-to-End Workflow Automation:

Automate the end-to-end process from user input to response generation, ensuring that the solution leverages AWS services like Bedrock Agents, CloudWatch, and QuickSight for real-time monitoring and analytics.

Advanced Monitoring and Analytics:

Integrate AWS CloudWatch, QuickSight, and other monitoring tools so the accelerator provides real-time insight into performance metrics, user interactions, and system health, enabling continuous optimization of service delivery and rapid identification of issues.

Model Monitoring and Maintenance:

Implement model monitoring to track performance metrics and trigger retraining as necessary.
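As a minimal sketch of that trigger logic: track a rolling window of evaluation scores and flag retraining when the average drifts below a threshold. The window size and threshold are illustrative assumptions; in practice the scores would come from CloudWatch metrics or SageMaker Model Monitor.

```python
# Toy monitoring check: flag retraining when a rolling accuracy metric drifts
# below a threshold. Window size and threshold are illustrative.

from collections import deque

def needs_retraining(scores, threshold: float = 0.9) -> bool:
    """True when the rolling average of recent evaluation scores falls below the bar."""
    return sum(scores) / len(scores) < threshold

window = deque(maxlen=5)
for score in [0.95, 0.93, 0.91, 0.82, 0.78]:  # simulated daily evaluation scores
    window.append(score)
retrain = needs_retraining(window)  # rolling average 0.878 < 0.9, so True
```

In the automated pipeline above, a `True` here is what would kick off the retraining stage.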

Collaboration:

Work closely with data engineers and DevOps engineers to ensure seamless integration of models into the production pipeline.

Documentation:

Document model development processes, deployment procedures, and monitoring setups for knowledge sharing and future reference.

 

Must-Have Skills

 

Machine Learning: Strong experience with machine learning frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.

MLOps Tools: Proficiency with Amazon SageMaker for model training, deployment, and monitoring.

Document Processing: Experience processing Word, PDF, and image documents.

OCR: Experience with OCR tools such as Tesseract or AWS Textract (preferred).

Programming: Proficiency in Python, including libraries such as Pandas, NumPy, and Scikit-Learn.

Model Deployment: Experience with deploying and managing machine learning models in production environments.

Version Control: Familiarity with version control systems like Git.

Automation: Experience with automating ML workflows using tools like AWS Step Functions or Apache Airflow.

Agile Methodologies: Experience working in Agile environments using tools like Jira and Confluence.

 

Nice-to-Have Skills

 

LLM: Experience with LLM/GenAI models, LLM services (Bedrock or OpenAI), LLM abstraction layers (e.g., Dify, LangChain, FlowiseAI), agent frameworks, and RAG.

Deep Learning: Experience with deep learning models and techniques.

Data Engineering: Basic understanding of data pipelines and ETL processes.

Containerization: Experience with Docker and Kubernetes (EKS).

Serverless Architectures: Experience with AWS Lambda and Step Functions.

Rule Engine Frameworks: Experience with Drools or similar.

 

If you are a motivated individual with a passion for ML and a desire to contribute to a dynamic team environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of infrastructure and driving innovation in software delivery processes.
