Senior Artificial Intelligence Specialist
Why this role exists:
We are looking for an AI / Generative AI Engineer to join our AI team and contribute to LLM-based systems used in production.
In this role, you will work on Generative AI solutions such as chatbots, AI assistants, MCP-based services (Model Context Protocol servers), and other LLM-powered components. These systems support knowledge-driven workflows, AI-assisted interactions, and AI automation across the platform.
You will work within an existing ML and data platform, contributing to systems that are practical to operate, observable, and reliable. The role is hands-on and engineering-focused, with guidance from senior engineers and clear ownership boundaries.
This is not a research role and not limited to prompt writing. The focus is on building, integrating, and operating GenAI systems as part of real products and internal tools.
What you'll drive:
- Implement and maintain LLM-based features and services under guidance;
- Build and improve chatbots and conversational AI systems with predictable behavior;
- Contribute to AI-assisted and AI-automated workflows that reduce manual effort;
- Work on RAG pipelines, including document ingestion, embeddings, retrieval, and generation;
- Implement parts of agent-style workflows (tool usage, step orchestration);
- Integrate GenAI components with existing backend services and APIs;
- Help monitor system behavior, performance, and basic quality metrics;
- Participate in debugging issues and supporting systems in production;
- Collaborate with Product, ML Engineering, and Backend teams;
- Follow team standards for code quality, testing, and documentation.
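To illustrate the RAG responsibilities above (ingestion, embeddings, retrieval, generation), here is a minimal, self-contained sketch. It is purely illustrative, not part of the role: the "embeddings" are toy bag-of-words vectors, and in production they would come from a model-backed service (e.g. via Amazon Bedrock) with a real vector store.

```python
# Illustrative RAG flow: ingest -> embed -> retrieve -> assemble prompt.
# Toy bag-of-words "embeddings" stand in for a real embedding model.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': term counts (stand-in for a model-backed embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank ingested documents by similarity to the query, return top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to an LLM."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via chat.",
    "Accounts can be deleted from the settings page.",
]
query = "How long do refunds take?"
print(build_prompt(query, retrieve(query, docs, k=1)))
```

The generation step itself is omitted: in a real service the assembled prompt would be sent to an LLM endpoint, with retrieval quality and latency monitored as described above.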
What makes you a GR8 fit:
- Solid background in software engineering;
- Hands-on experience with LLM-based systems or applications;
- Practical experience with LLMs, RAG architectures and embedding-based retrieval, chatbot or conversational system development, and tool/function calling or structured LLM outputs;
- Familiarity with context or tool-serving patterns (e.g. MCP or similar concepts);
- Experience contributing to production systems;
- Basic understanding of performance, latency, and cost considerations;
- Experience working with cloud infrastructure (preferably AWS);
- Familiarity with evaluation or monitoring of AI systems;
- Strong engineering fundamentals (clean code, testing, debugging).
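As one illustration of "structured LLM outputs" above: a service typically validates a model's JSON reply against an expected schema before acting on it. The sketch below uses only the standard library to stay dependency-free (the team stack lists Pydantic for this in practice); the field names and tool name are assumptions, not part of the role description.

```python
import json

# Expected shape of a tool call the LLM is asked to emit.
# In production this would typically be a Pydantic model; plain
# isinstance checks keep this sketch dependency-free.
REQUIRED_FIELDS = {"tool": str, "arguments": dict}

def parse_tool_call(raw: str) -> dict:
    """Parse and validate an LLM reply meant to be a structured tool call."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

# Hypothetical model reply for a customer-support tool call.
reply = '{"tool": "search_orders", "arguments": {"customer_id": "42"}}'
call = parse_tool_call(reply)
print(call["tool"])
```

Rejecting malformed replies at this boundary keeps downstream tool execution predictable, which is what "chatbots with predictable behavior" amounts to in code.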
Tech Stack:
Hands-on experience with most of the tools listed below is expected; openness to adopting new ones is part of the role.
Languages
- Python
- SQL (basic)
LLMs & GenAI
- Amazon Bedrock
- Third-party LLM APIs (e.g. OpenAI, Anthropic)
- Open-source models (LLaMA, Mistral/Mixtral, Qwen, or similar)
- Hugging Face ecosystem
Frameworks & Orchestration
- LangChain
- LangGraph
- Pydantic
- Langfuse
RAG & Retrieval
- Embeddings
- Vector stores: Qdrant, FAISS, OpenSearch, pgvector
Cloud & Infrastructure (AWS-first)
- AWS (ECS / EKS / Lambda)
- S3, OpenSearch
- API Gateway
Dev & Ops
- Docker
- Git
- CI/CD
Required languages

| Language | Level |
| --- | --- |
| English | B2 - Upper Intermediate |
| Ukrainian | Native |