Backend & MLOps Engineer

Company Background
PiñataFarms AI builds advanced AI products that let anyone get creative by recontextualizing content and coming together with others. Our products are used by millions. With a track record of major cultural and tech successes, we use computer vision, mobile hardware, and consumer tech to create a generational phenomenon. Backed by top Silicon Valley VCs and influential strategic investors, we are at the forefront of innovation.

Role Description
We’re hiring a Backend & MLOps Engineer to own the Node.js backend and Python-based ML infrastructure that powers multiple consumer apps. You’ll operate across AWS Amplify/AppSync, Firebase, and on-prem GPUs, ensuring our services remain reliable, scalable, and cost-efficient. We are open to remote for this role and have teams in both the US and Europe.

What You’ll Be Doing:

  • Develop, deploy, and maintain backend APIs on AWS Amplify & AppSync (GraphQL) or GCP (Firebase)
  • Manage Docker-based Python CV/LLM inference pipelines, primarily on on-prem GPUs
  • Prototype rapid fixes or features, then harden them into production-grade systems
  • Collaborate with the product and iOS teams to deliver new endpoints and incremental features while meeting reliability and cost targets
  • Streamline and automate routine tasks with scripts and modern AI tooling
  • Maintain data pulls and pipelines – schedule and automate extracts from BigQuery and DynamoDB for analytics and reporting


What We’re Looking For:

  • 5+ years in backend engineering, DevOps, or MLOps
  • Strong proficiency in Python and solid Node.js/TypeScript skills
  • Hands-on experience with core AWS services; comfortable with GCP/Firebase basics
  • Demonstrated ability to learn quickly, prototype fast, and automate relentlessly – you default to scripting or AI-powered tools over manual work
  • Solid SQL skills and familiarity with data-pipeline best practices
  • Excellent written and verbal communication skills, plus a reliable on-call record


Bonus Points:

  • Experience operating GPU workloads on-prem and in AWS/GCP
  • Prior use of LLM-based or AI-driven tooling to accelerate testing, data pulls, or infrastructure operations