Senior Full-Stack AI Engineer (LLM Infrastructure and Systems)
TLS
We are looking for a Senior Full-Stack AI Engineer to help us build, deploy, and operate AI-powered systems, from infrastructure and backend services to LLM/VLM integration in production.
Our product focuses on home monitoring and smart-home safety (water leaks, appliance failures, HVAC issues, abnormal humidity, and other environmental risks).
Fully remote role in a strong engineering team.
What You Will Do
- Deploy and operate LLM/VLM models (cloud and on-prem).
- Configure inference services with a focus on stability and performance.
- Build and maintain RAG pipelines and AI agents.
- Work with modern AI frameworks (LangChain, LlamaIndex, LangGraph, CrewAI, AutoGen).
- Develop backend services in Python and integrate AI into APIs.
- Configure and support Linux servers, networking, and reverse proxies (e.g. Nginx).
- Containerize services with Docker and support CI/CD workflows.
- Monitor, debug, and improve reliability of AI and backend systems.
Requirements
- Experience working with LLM-based systems in practice.
- Strong Python skills.
- Hands-on experience with:
  - AI agents or RAG systems
  - Docker, Linux, Git
- General understanding of:
  - Machine learning fundamentals
  - Model fine-tuning and evaluation
  - Basic cybersecurity principles
Kubernetes, advanced DevOps tooling, and IoT experience are a plus but not required.
Nice to Have
- Experience with IoT or smart-home systems.
- Familiarity with sensor-based monitoring.
- Knowledge of Z-Wave or ZigBee.
What We Offer
- Fully remote work with flexible hours.
- Modern AI stack and real production challenges.
- Opportunity to influence AI infrastructure and product direction.
- Long-term collaboration in a well-organized engineering team.
Required Languages
- English: B2 (Upper Intermediate)