GenAI Consultant
EPAM GenAI Consultants are changemakers who bridge strategy and technology—applying agentic intelligence, RAG, and multimodal AI to transform how enterprises operate, serve users, and make decisions.
Preferred Tech Stack
Programming Languages
- Python (*)
- TypeScript
- Rust
- Mojo
- Go
Fine-Tuning & Optimization
- LoRA (Low-Rank Adaptation)
- PEFT (Parameter-Efficient Fine-Tuning)
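A minimal sketch of the LoRA/PEFT setup named above, using Hugging Face's peft library; the base-model checkpoint and hyperparameters are illustrative assumptions, not prescribed by the role.

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT (illustrative sketch;
# the checkpoint name and hyperparameters are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # low-rank dimension
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```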
Foundation & Open Models
- OpenAI (GPT series), Anthropic Claude family, Google Gemini, Grok (* at least one of them)
- Llama
- Falcon
- Mistral
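A hedged sketch of calling one of these hosted foundation models through the OpenAI Python SDK; the model name is an assumption, and Claude and Gemini expose analogous chat APIs.

```python
# Sketch of a hosted foundation-model call via the OpenAI Python SDK
# (model name is an assumption; other providers offer similar chat APIs).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise enterprise assistant."},
        {"role": "user", "content": "Summarize RAG in two sentences."},
    ],
)
print(response.choices[0].message.content)
```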
Inference Engines
- vLLM
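A minimal offline-batch inference sketch with vLLM; the open-weights checkpoint and sampling settings are illustrative assumptions.

```python
# Minimal offline-batch inference with vLLM (checkpoint name is illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["Explain LoRA in one sentence."], params)
print(outputs[0].outputs[0].text)
```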
Prompting & Reasoning Paradigms (*)
- CoT (Chain of Thought)
- ToT (Tree of Thought)
- ReAct (Reasoning + Acting)
- DSPy
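A framework-agnostic sketch of the ReAct pattern listed above: the model alternates reasoning, tool calls, and observations until it emits a final answer. The `call_llm` function here is a scripted stand-in; real projects would back it with an LLM SDK or a framework such as LangGraph or DSPy.

```python
# ReAct-style loop sketch (plain Python). `call_llm` is a canned stand-in for
# a real chat-completion call; the tool registry is a stub.
TOOLS = {"search": lambda q: f"(stub search results for: {q})"}

_SCRIPTED = iter([
    "Thought: I should look this up.\nAction: search: EPAM GenAI consulting",
    "Thought: I have enough context.\nFinal Answer: EPAM pairs strategy with GenAI delivery.",
])

def call_llm(prompt: str) -> str:
    return next(_SCRIPTED)  # replace with a real model call in practice

def react_agent(question: str, max_steps: int = 3) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(
            transcript
            + "Respond with 'Action: <tool>: <input>' or 'Final Answer: <answer>'."
        )
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            tool, arg = [p.strip() for p in step.split("Action:", 1)[1].split(":", 1)]
            transcript += f"Observation: {TOOLS[tool](arg)}\n"
    return "No answer within the step budget."

print(react_agent("How does EPAM position GenAI consulting?"))
```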
Multimodal AI Models
- CLIP (*)
- BLIP2
- Whisper
- LLaVA
- SAM (Segment Anything Model)
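A short zero-shot image-text matching sketch with CLIP via Hugging Face transformers; the checkpoint, image path, and candidate labels are illustrative assumptions.

```python
# Zero-shot image-text matching with CLIP (checkpoint, image path, and labels
# are illustrative).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("invoice_photo.jpg")  # any local image
labels = ["an invoice", "a cat", "a dashboard screenshot"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```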
Retrieval-Augmented Generation (RAG)
- RAG (core concept) (*)
- RAGAS (RAG evaluation and scoring) (*)
- Haystack (RAG orchestration & experimentation)
- LangChain Evaluation (LCEL Eval)
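A minimal end-to-end RAG sketch covering the core concept: embed a toy corpus, retrieve the closest chunks, and ground the prompt. The embedding model name and the `call_llm` stub are assumptions; a production pipeline would add chunking, reranking, and evaluation with RAGAS or similar.

```python
# Minimal RAG sketch: dense retrieval over a toy corpus plus a grounded prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Our refund policy allows returns within 30 days.",
    "Support is available 24/7 via chat.",
    "Enterprise plans include a dedicated success manager.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")       # assumed model
corpus_vecs = embedder.encode(corpus, normalize_embeddings=True)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in any provider SDK here")  # hypothetical

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)
    scores = (corpus_vecs @ q.T).ravel()                  # cosine similarity
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(retrieve("Can customers get their money back?"))
```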
Agentic Frameworks
- CrewAI (*)
- AutoGen, AutoGPT, LangGraph, Semantic Kernel, LangChain (* at least 2 of them)
- Prompt Tools: PromptLayer, PromptFlow (Azure), Guidance by Microsoft (* at least one of them)
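A hedged CrewAI sketch of a small two-agent workflow; the roles, goals, and the implicit default LLM configuration (an API key in the environment) are assumptions, and the exact constructor arguments may vary by CrewAI version.

```python
# Two-agent CrewAI workflow sketch (roles, goals, and default LLM settings are
# assumptions; check the installed CrewAI version's API).
from crewai import Agent, Task, Crew

researcher = Agent(
    role="GenAI researcher",
    goal="Collect relevant facts about the client's document workflow",
    backstory="A consultant who digs through discovery-workshop notes.",
)
writer = Agent(
    role="Solution writer",
    goal="Draft a one-page RAG solution outline",
    backstory="A consultant who turns findings into client-ready narratives.",
)

research = Task(
    description="List the key pain points in the client's document workflow.",
    expected_output="A short bulleted list of pain points.",
    agent=researcher,
)
outline = Task(
    description="Turn the pain points into a one-page RAG solution outline.",
    expected_output="A structured outline with sections and next steps.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research, outline])
print(crew.kickoff())
```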
Evaluation & Observability
- RAGAS – Quality metrics for RAG (faithfulness, context precision, etc.) (*)
- TruLens – LLM eval with attribution and trace inspection (*)
- EvalGAI – GenAI evaluation testbench
- Giskard – Bias and robustness testing for NLP
- Helicone – Real-time tracing and logging for LLM apps
- HumanEval – Code generation correctness testing
- OpenRAI – Evaluation agent orchestration
- PromptBench – Prompt engineering comparison
- Phoenix by Arize AI – Multimodal and LLM observability
- Zeno – Human-in-the-loop LLM evaluation platform
- LangSmith – LangChain observability and evaluation
- WhyLabs – Data drift and model behavior monitoring
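A hedged RAGAS evaluation sketch using the ragas 0.1-style interface; the sample record is made up, and the default metrics require a judge-LLM API key to be configured.

```python
# RAGAS evaluation sketch (0.1-style API; sample record is illustrative and a
# judge-LLM API key is assumed to be configured).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

records = {
    "question": ["What is the refund window?"],
    "answer": ["Refunds are accepted within 30 days."],
    "contexts": [["Our refund policy allows returns within 30 days."]],
    "ground_truth": ["Returns are allowed within 30 days of purchase."],
}
result = evaluate(
    Dataset.from_dict(records),
    metrics=[faithfulness, answer_relevancy, context_precision],
)
print(result)  # per-metric scores to close the feedback loop
```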
Explainability & Interpretability (understanding)
- SHAP
- LIME
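A minimal SHAP sketch on a tabular model; the bundled dataset and model choice are illustrative, and explaining LLM behaviour typically calls for different tooling than classical feature attribution.

```python
# SHAP feature-attribution sketch on a tabular regressor (dataset and model
# are illustrative).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)   # unified API picks a tree explainer
shap_values = explainer(X.iloc[:100])  # per-feature attributions
shap.plots.beeswarm(shap_values)       # global importance overview
```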
Orchestration & Experimentation (*)
- MLflow
- Airflow
- Weights & Biases (W&B)
- LangSmith
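A minimal MLflow tracking sketch for a prompt-tuning experiment; the experiment name, parameters, and metric values are illustrative assumptions.

```python
# MLflow experiment-tracking sketch (names and values are illustrative).
import mlflow

mlflow.set_experiment("rag-prompt-tuning")
with mlflow.start_run(run_name="baseline-prompt"):
    mlflow.log_param("model", "gpt-4o-mini")
    mlflow.log_param("prompt_version", "v3")
    mlflow.log_metric("faithfulness", 0.87)
    mlflow.log_metric("context_precision", 0.74)
```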
Infrastructure & Deployment
- Kubernetes
- Amazon SageMaker
- Microsoft Azure AI
- Google Vertex AI
- Docker
- Ray Serve (for distributed model serving)
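A hedged Ray Serve sketch of a lightweight model-serving endpoint; the summarization logic is a stub, and the replica count and route prefix are illustrative.

```python
# Ray Serve deployment sketch (stub logic; replica count and route are
# illustrative assumptions).
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)
class Summarizer:
    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        text = payload.get("text", "")
        # Stub: swap in a real model call (e.g., a vLLM engine or a managed
        # SageMaker/Vertex endpoint) for production use.
        return {"summary": text[:200]}

serve.run(Summarizer.bind(), route_prefix="/summarize")
```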
Responsibilities
- Lead GenAI discovery workshops with clients
- Design Retrieval-Augmented Generation (RAG) systems and agentic workflows
- Deliver PoCs and MVPs using LangChain, LangGraph, CrewAI, Semantic Kernel, DSPy, RAGAS
- Ensure Responsible AI principles in deployments (bias, fairness, explainability)
- Support RFPs, technical demos, and GenAI architecture narratives
- Reuse accelerators and templates for faster delivery
- Set up governance and compliance for enterprise-scale AI
- Use evaluation frameworks to close feedback loops
Requirements
- Consulting: Experience in exploring the business problem and converting it to applied AI technical solutions; expertise in pre-sales, solution definition activities
- Data Science: 3+ years of hands-on experience with core Data Science, as well as knowledge of one of the advanced Data Science and AI domains (Computer Vision, NLP, Advanced Analytics etc.)
- Engineering: Experience delivering applied AI from concept to production; familiarity and hands-on experience with MLOps, data engineering, the design of data analytics platforms, and technical leadership
- Leadership: Track record of delivering complex AI-empowered and/or AI-empowering programs to clients in a leadership position. Experience in managing and growing a team to scale up Data Science, AI, and ML capabilities is a big plus.
- Excellent communication skills (active listening, writing, and presentation), a drive for problem solving and creative solutions, and high EQ
- Experience with LLMOps or GenAIOps tooling (e.g., guardrails, tracing, prompt tuning workflows)
- Understanding of the importance of evaluating AI products is a must
- Knowledge of cloud GenAI platforms (AWS Bedrock, Azure OpenAI, GCP Vertex AI)
- Understanding of data privacy, compliance, and governance in GenAI (GDPR, HIPAA, SOC 2, RAI, etc.)
- In-depth understanding of a specific industry or a broad range of industries
Required languages
English: B2 (Upper Intermediate)
Data Science, Machine Learning, NLP, Deep Learning, Data Science/Machine Learning, Consulting, pre-sale, ML, AI/ML/DL, Generative AI