Full-Stack AI Platform Engineer
Data Science UA is a service company with deep expertise in AI and Data Science. Our story started in 2016 with the first Data Science UA Conference in Kyiv, and since then, we’ve built one of the largest AI communities in Europe.
About the role:
We are looking for a Full-Stack AI Platform Engineer to join our team and help build, maintain, and scale AI-powered interactive products.
This is a hands-on role for an engineer who owns deployment, infrastructure, real-time systems, and AI-service integration end to end. You will work on production systems that combine web applications, real-time communication, Kubernetes-based backend services, and GPU-hosted AI pipelines.
The core of this role is production reliability and deployment ownership — keeping live systems stable, responding to incidents, and integrating AI models into scalable real-time infrastructure. Frontend work exists but is secondary: the team has a designer and uses AI-assisted tooling for UI iteration.
Responsibilities:
- Own deployment, environment management, and release workflows across Kubernetes-based, Railway-hosted, and other cloud infrastructure.
- Respond to production incidents, perform rollbacks, and restore service stability, including outside business hours.
- Deploy and operate AI inference services on GPU platforms such as Modal or RunPod and integrate them into real-time product flows.
- Build and maintain real-time communication and async processing flows, including LiveKit-based session infrastructure.
- Work with TypeScript / JavaScript and Python application code in production environments.
- Support database-backed product features, schema evolution, and migration workflows.
- Improve observability, logging, monitoring, and incident debugging workflows.
- Contribute to frontend and backend product functionality as needed, using AI-assisted tooling where appropriate.
- Maintain clear and up-to-date documentation for every product, service, and integration, including architecture diagrams, API contracts, deployment procedures, and environment configuration.
- Ensure every new feature is discussed, scoped, and agreed upon before implementation begins. No undocumented or uncoordinated changes to production systems.
- Collaborate with engineering and product stakeholders to clarify requirements, identify risks, and ship practical solutions.
Required skills:
- Strong hands-on experience with TypeScript (Node.js) and Python for production backend development.
- Solid experience with Kubernetes, Helm, and container-based deployment workflows.
- Experience with real-time systems: WebSocket-based communication, session lifecycle management, and event-driven flows.
- Experience deploying and operating AI inference services on GPU platforms such as Modal, RunPod, Replicate, or similar.
- Experience integrating LLM-based services, voice AI APIs, or media-processing pipelines into production systems.
- Experience debugging distributed or multi-service systems, including reading logs across multiple services and environments.
- Comfort with production ownership: incident response, rollbacks, and on-call responsibility.
- Solid understanding of backend APIs, service integration, and system design.
- Experience with PostgreSQL and ORM-backed schema management and migrations.
- Sufficient frontend experience to ship and deploy UI changes with AI-assisted tooling.
- Ability to work independently and bring structure to ambiguous technical situations.
- Strong written and verbal English communication skills.
- Experience using AI-assisted coding tools (e.g., Cursor, GitHub Copilot, Gemini) for faster development and iteration.
Nice to have:
- Hands-on experience with real-time audio/video systems such as LiveKit or similar platforms.
- Experience with avatar, video rendering, or real-time media processing pipelines.
- Experience with frameworks and tooling such as TanStack Start, Nitro, or Vite.
- Familiarity with vector databases or embedding-based retrieval (pgvector or similar).
- Familiarity with Python ML stacks such as PyTorch or CUDA-based GPU inference.
- Experience with voice AI pipelines involving services such as ElevenLabs or Deepgram.
- Experience with CI/CD pipelines such as Jenkins, GitHub Actions, or similar.
- Experience with Next.js and Nx monorepo environments.
- Experience with GraphQL / Apollo or similar API integration patterns.
What we value:
- Strong ownership and reliability, especially in production.
- Pragmatic engineering judgment: shipping something stable beats over-engineering.
- Comfort with production responsibility and debugging under pressure.
- Ability to move across backend, infrastructure, and frontend without losing clarity.
- Clear communication, honest estimation, and practical execution.
- Documentation discipline: if it is not written down, it does not exist.
What we offer:
- Real impact: your work keeps live, client-facing products running.
- Mentorship and growth.
- Freedom to experiment and test new formats.
- Flexible schedule and a remote-first culture.
Required languages:
| English | B2 - Upper Intermediate |
| Ukrainian | Native |