Senior Full-Stack Research Engineer
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
This platform provides an AI-driven intelligence layer for real estate underwriting, focusing on short-term bridge financing. It ingests diverse public and user-submitted property data—ranging from financial documents and renovation plans to market listings—and applies OCR, classification, schema extraction, and large language models to deliver comparable asset analysis and risk scoring.
The system operates at scale, processing thousands of complex documents monthly with strict latency and accuracy requirements. Key challenges include normalizing unstructured inputs, aligning multiple data schemas, orchestrating agentic AI workflows, integrating multi-model LLM pipelines, and maintaining production reliability under third-party outages. Strong engineers are critical to own the full research-to-production lifecycle, optimize AI flows, and ensure measurable business impact.
About the Role:
As a Senior Full-Stack Research Engineer on our AI research team, you will drive end-to-end development from hypothesis and prototyping to production deployment. You will design, build, and optimize critical pipelines that extract, classify, and reason over complex financial and property documents using frameworks such as DSPy and LangGraph. You will implement and maintain FastAPI services and TypeScript integrations, ensuring system resilience, observability, and scalable data aggregation.
You will lead systematic evaluations of AI precision and recall, identify bottlenecks, and apply advanced techniques—like multi-step reasoning or custom chunking—to boost performance. This role demands ownership, rapid iteration from Jupyter notebooks to containerized services, and close collaboration on architecture to define a research-to-release playbook for next-generation initiatives.
Key Responsibilities:
- Optimize AI flow pipelines (OCR → classify → extract) using LangGraph and DSPy to reduce error rates and latency on financial and property documents.
- Orchestrate fault-tolerant, agentic workflows that connect document classification, OCR, and schema extraction modules into cohesive state machines.
- Conduct gap analyses with custom evaluation suites, measure precision and recall, and implement prompt/evaluation loop improvements and multi-step reasoning strategies.
- Prototype LLM-based models in Jupyter Notebooks and productionize them via FastAPI endpoints and TypeScript client integrations.
- Build and scale data aggregation systems, normalize heterogeneous schemas, and resolve conflicts across external real estate datasets.
- Implement robust caching, retry logic, and observability (tracing, structured logging) to maintain resilience against service outages.
- Translate research prototypes into tested, containerized services and integrate them into CI/CD pipelines.
- Collaborate on defining the architecture and establish a standard research-to-release playbook for upcoming AI initiatives.
Required Competence and Skills:
- 5+ years as a backend developer, with recent experience driving end-to-end AI research and applying LLMs for verification (prompt/evaluation loops) and broader problem-solving.
- Experience working on AI-centric products, contributing to both backend development and the integration of AI as a core component of the system.
- Proficiency in Python or Node.js.
- Understanding of LLM orchestration tools (e.g., LangGraph), prompt optimization frameworks (e.g., DSPy), vector retrieval systems (e.g., Weaviate, Elasticsearch), and multi-model integrations (OpenAI, Gemini, vLLM).
- Strong skills in Docker, container orchestration, message queues, CI/CD, and observability practices.
- Demonstrated ability to design complex data schemas and perform multi-source data fusion.
- Comfortable using Jupyter Notebooks for rapid experimentation, benchmarking, and metric-driven prototyping.
- Experience with document pipelines, including OCR, parsing, classification, and handling noisy real-world inputs.
Nice to Have:
- TypeScript in strict mode with Zod schema validation.
- Graph database modeling experience with Neo4j.
Why Us:
- We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
- We provide full accounting and legal support in all countries where we operate.
- We offer a fully remote work model, with a powerful workstation and access to a co-working space if you need it.
- We offer a highly competitive package with yearly performance and compensation reviews.
Required Languages:
- English: B2 (Upper-Intermediate)