AI Platform Engineer

(LLM Ops & Scalable ML)

 

Location: Remote-first within the EU/EEA (occasional EU travel for team off-sites)
Department: Engineering
Employment type: Full-time, permanent
 

 

About Trialize

Trialize is transforming clinical trials with an AI-driven SaaS platform that automates study set-up, streamlines data flow and boosts data integrity. We serve pharmaceutical companies, biotech firms and CROs, helping them run faster, more reliable trials and bring life-changing therapies to patients sooner.

 

Role overview

We are looking for an AI Platform Engineer who can design, build and operate the next generation of our LLM-powered infrastructure.

 

Must-have experience

  • Proven production success with MCP (Model Context Protocol), A2A (agent-to-agent protocols) and LoRA (or other parameter-efficient fine-tuning methods).
  • Demonstrably fast thinker and maker: you can prototype, benchmark and ship in days.
  • Pro-level coding in Python or TypeScript / JavaScript, including test automation and CI/CD.
  • Expertise in graph databases (schema design, Cypher/Gremlin, sharding, backup, HA).
  • Deep knowledge of asynchronous messaging and RPC patterns (Kafka, gRPC or tRPC).
  • Hands-on production experience with Kubernetes, Terraform and multi-cloud deployment.
  • Ability to stand up end-to-end solutions (similar to Lovable, Rork, or equivalents) autonomously.
  • Expert understanding of Retrieval-Augmented Generation (RAG) design patterns, latency trade-offs and evaluation metrics.

 

Nice-to-have

  • Familiarity with GPU scheduling (Karpenter, Kubeflow, Ray Serve).
  • Prior work on FDA-regulated or ISO-compliant software.
  • Contributions to open-source LLM ops or vector-database projects.

 

Why join Trialize?

  • Build the future of clinical AI: your work shortens the path to new medicines.
  • Autonomy & speed: ship without red tape in a senior, high-trust team.
  • Continuous learning: budget for conferences, certifications and cloud credits.
  • Remote flexibility: work where you are most productive; meet the team quarterly.

 

How to apply

 

Attach your CV (PDF) and a brief cover letter.
In your letter, tell us, in fewer than 100 words each:

 

  1. The fastest ML prototype you ever shipped (timeline, stack, impact).
  2. How you used LoRA or a similar method to slash training cost or inference latency.
  3. Your favourite RAG architecture and why you chose it.

 

We review applications on a rolling basis and aim to respond within ten working days.

Required languages

English C1 - Advanced