Cloud Platform Engineer
About ListKit
ListKit is a B2B lead generation platform processing 825M+ person records, serving thousands of customers across DFY (done-for-you) and PLG (self-serve) segments. Our data enrichment pipeline runs on Google Cloud (Cloud Run, PubSub, Cloud Tasks, Firestore) with TypeScript/Node.js, backed by SingleStore for high-performance JOINs against that dataset and SQL Server for transactional order data.
We are an AI-first engineering team. Every engineer uses Claude (Anthropic) as a core development tool for architecture decisions, code review, debugging, estimation, and daily workflows. If you haven't worked with AI coding tools yet, that's fine, but you need to be excited about making it your primary way of working.
The Role
You will own the cloud-native data enrichment platform that powers ListKit's core product: turning customer search criteria into verified, enriched lead lists.
The enrichment service (TypeScript/Node.js on GCP) orchestrates the full pipeline: receiving order requests, querying 825M+ records via SingleStore stored procedures, routing records through email finder APIs (Enrow, Prospeo, LeadMagic) and email verifiers (MillionVerifier, TryKitt), tracking job lifecycle via Firestore counters, processing billing/refunds, and delivering final exports to GCS.
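To give a feel for the provider-waterfall part of this pipeline, here is a minimal sketch: try each email-finder provider in priority order, fail over on errors or misses, then fall through to a local lookup path. The interfaces and names below are illustrative assumptions, not ListKit's actual code.

```typescript
// Hypothetical sketch of a provider waterfall. Interfaces are illustrative only.
type Lead = { firstName: string; lastName: string; domain: string };

interface EmailFinder {
  name: string;
  find(lead: Lead): Promise<string | null>; // null = no email found
}

async function waterfall(
  lead: Lead,
  providers: EmailFinder[],
  fallback: (lead: Lead) => Promise<string | null>,
): Promise<{ email: string; source: string } | null> {
  for (const provider of providers) {
    try {
      const email = await provider.find(lead);
      if (email) return { email, source: provider.name };
    } catch {
      // Provider error (timeout, quota, 5xx): fail over to the next one.
      continue;
    }
  }
  // Last resort: a local lookup, e.g. the SingleStore JOIN path.
  const email = await fallback(lead);
  return email ? { email, source: "local" } : null;
}
```

The real service also has to weigh provider cost and per-provider rate limits when choosing the order, which is where the "smarter failover" work below comes in.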
The integration challenge: This service must work in lockstep with our .NET backend, which manages the customer-facing order lifecycle in SQL Server. Today, these are two disconnected state machines. Your first major project is building the completion bridge so the enrichment platform's results flow back to the .NET order processing pipeline. You don't need to be a .NET developer; we have a strong .NET team. But you need to understand the integration surface and design clean APIs between the two systems.
What You Will Do
First 30 days
- Take ownership of the listkit-enrichment codebase (TypeScript/Node.js, Cloud Run)
- Understand the full enrichment flow: PubSub dispatch, Cloud Tasks per-row processing, provider waterfall, verification chains, Firestore lifecycle counters, SingleStore JOINs, billing/refund integration
- Build the completion callback: when GCP enrichment finishes, write results back to SQL Server's OrderProcessing table so the .NET pipeline detects completion
- Fix the data-retrieval PubSub subscriber gap
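For the completion callback, the core of the bridge is mapping an enrichment outcome to a row the .NET poller can detect. The sketch below shows one possible payload shape; every field, column, and status name here is a hypothetical placeholder, since the real contract is designed with the .NET team.

```typescript
// Hypothetical completion-bridge mapping: enrichment outcome -> OrderProcessing
// update. Field and status names are illustrative placeholders only.
type EnrichmentOutcome = {
  orderId: string;
  totalRows: number;
  enrichedRows: number;
  failedRows: number;
  exportUrl: string; // e.g. a signed GCS URL for the final export
};

type OrderProcessingUpdate = {
  orderId: string;
  status: "Completed" | "CompletedWithErrors" | "Failed";
  resultUrl: string | null;
  completedAt: string; // ISO-8601; written so the .NET pipeline detects completion
};

function toOrderUpdate(o: EnrichmentOutcome, now = new Date()): OrderProcessingUpdate {
  const status =
    o.enrichedRows === 0 ? "Failed"
    : o.failedRows > 0 ? "CompletedWithErrors"
    : "Completed";
  return {
    orderId: o.orderId,
    status,
    resultUrl: o.enrichedRows > 0 ? o.exportUrl : null,
    completedAt: now.toISOString(),
  };
}
```

Keeping this mapping pure makes the bridge easy to test independently of SQL Server.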
First 90 days
- Own the enrichment platform end-to-end: reliability, performance, monitoring, alerting
- Improve provider waterfall logic: smarter failover between Enrow, Prospeo, LeadMagic, and the SingleStore JOIN path
- Build observability: structured logging, Cloud Monitoring dashboards, SLA tracking per provider
- Reduce enrichment latency for large orders (1000+ records)
- Work with the .NET team to design the target architecture where GCP enrichment fully replaces the Windows Services
Ongoing
- Add external email finding/verification API providers to Cloud Run/Cloud Tasks
- Optimize SingleStore stored procedures (sp_enrich_csv) for the 825M+ record dataset
- Scale the enrichment platform for higher throughput as order volume grows
- Build and maintain the external enrichment API (API key auth, rate limiting, usage tracking)
- Contribute to infrastructure-as-code (Terraform for GCP resources)
Required Skills
TypeScript/Node.js (3+ years)
- Express.js, async/await, Promises, streaming
- Building and consuming HTTP APIs at scale
- Error handling in distributed async workflows (retries, idempotency, dead-letter queues)
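As a concrete example of the idempotency part: PubSub delivers at-least-once, so a handler can receive the same message twice after a timeout or crash. One common guard is claiming the message ID before processing, shown below with an in-memory store as a stand-in for a persistent one (e.g. a Firestore document keyed by message ID).

```typescript
// Minimal idempotency guard for at-least-once delivery: record processed
// message IDs so redeliveries become no-ops. In-memory Set for illustration;
// a real service would persist the claim.
class IdempotencyGuard {
  private seen = new Set<string>();

  // Returns true if this ID was claimed now (first delivery),
  // false if it was already processed (duplicate delivery).
  claim(messageId: string): boolean {
    if (this.seen.has(messageId)) return false;
    this.seen.add(messageId);
    return true;
  }
}

async function handleMessage(
  guard: IdempotencyGuard,
  messageId: string,
  process: () => Promise<void>,
): Promise<"processed" | "duplicate"> {
  if (!guard.claim(messageId)) return "duplicate";
  await process();
  return "processed";
}
```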
Google Cloud Platform (2+ years, or equivalent AWS/Azure)
- Cloud Run (containerized services, autoscaling, concurrency)
- PubSub (message queues, subscriptions, delivery guarantees)
- Cloud Tasks (HTTP task dispatch, rate limiting, retry policies)
- Firestore (document database, transactions, atomic counters)
- GCS (object storage, signed URLs, streaming reads)
- Terraform (infrastructure-as-code)
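The "atomic counters" item refers to the pattern behind counter-based completion detection: each per-row worker increments a job counter, and the job is complete when processed rows reach the total. A plain object stands in for the Firestore document here; in production the increment would run via `FieldValue.increment()` inside a transaction.

```typescript
// Sketch of counter-based job completion detection. Plain object as a
// stand-in for a Firestore job document; field names are illustrative.
type JobCounters = { total: number; succeeded: number; failed: number };

function recordRow(c: JobCounters, ok: boolean): JobCounters {
  return ok
    ? { ...c, succeeded: c.succeeded + 1 }
    : { ...c, failed: c.failed + 1 };
}

function isComplete(c: JobCounters): boolean {
  return c.succeeded + c.failed >= c.total;
}
```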
Database experience
- SQL: complex JOINs, stored procedures, query optimization against 100M+ row tables
- SingleStore, MySQL, or PostgreSQL (enrichment service uses SingleStore)
- Basic SQL Server knowledge for the integration surface
Distributed systems thinking
- State tracked across multiple stores (Firestore + SingleStore + SQL Server)
- Eventual consistency, idempotency guards, distributed locking
- Job lifecycle management: queued, processing, completed, failed
- Rate limiting against third-party APIs with different throttle characteristics
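One standard way to handle providers with different throttle characteristics is a token bucket per provider, each with its own capacity and refill rate. A minimal sketch, with purely illustrative numbers rather than any real provider's limits:

```typescript
// Per-provider token bucket: capacity bounds bursts, refillPerSec bounds
// sustained rate. Each third-party provider gets its own instance.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now = Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if a request may proceed now, false if it should wait/requeue.
  tryTake(now = Date.now()): boolean {
    const elapsed = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

In practice Cloud Tasks queue-level rate limits cover much of this, but an in-process bucket lets one service respect several providers' distinct limits at once.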
AI-first development mindset
- Experience with AI coding assistants (Claude, GitHub Copilot, Cursor, or similar)
- Willingness to use Claude as your primary tool for code review, debugging, architecture, and estimation
- We estimate all engineering work assuming AI-assisted development. Mechanical work compresses dramatically. You estimate only irreducible human work.
Nice to Have
- Experience integrating with third-party enrichment/data APIs (email finders, verifiers, data providers)
- Experience building multi-tenant SaaS APIs with API key authentication and per-plan rate limiting
- Stripe billing integration (credits, refunds, usage-based pricing)
- Understanding of .NET/C# (reading code, understanding integration points)
- Event-driven architectures and CQRS patterns
- Familiarity with Linear for project management
How We Work
- AI-first: Claude is embedded in everything we do. Code review, spec writing, debugging, Linear issue management. We track AI adoption scores and expect engineers to reach Vanguard tier.
- Async by default: Team spans Lisbon (WET/WEST), Ukraine (EET/EEST) and US. Written communication via Slack, Linear, and documented decisions.
- Ship fast, fix fast: Continuous deployment for hotfixes, scheduled release trains for features. Production incidents get addressed immediately.
- Ownership mentality: You own your systems end-to-end. If the enrichment platform breaks, you fix it.
Tools & Stack
Your primary stack
- TypeScript (Node.js 20+) on Google Cloud Run
- Google Cloud: PubSub, Cloud Tasks, Firestore, GCS, Secret Manager, Cloud Build
- SingleStore (825M+ row dataset, stored procedures, columnar JOINs)
- Terraform (infrastructure-as-code)
- Third-party APIs: Enrow, Prospeo, LeadMagic, MillionVerifier, TryKitt, AtData
Integration surface
- SQL Server (read/write to OrderProcessing table for the .NET bridge)
- .NET backend (you design the API contract; the .NET team implements their side)
Shared tooling
- Claude (Opus/Sonnet via claude.ai, Claude Code CLI, MCP connectors)
- Linear (project management with Claude-integrated workflows)
- Slack, Google Meet, Granola (meeting notes)
- GitHub Actions (CI/CD), Stripe (billing), Intercom (support), HubSpot (CRM)
How to Apply
Send your CV and a short note (3-4 sentences) answering one of:
1. "Describe a Cloud Run or serverless service you built that processes jobs asynchronously via PubSub/SQS/EventBridge. How did you handle job lifecycle, retries, and completion detection?"
2. "Tell us about a time you had to integrate two systems that tracked state in different databases. How did you design the bridge?"
We will prioritize candidates who can start within 2-3 weeks.
Required skills experience
| Skill | Minimum experience |
| TypeScript | 3 years |
| Node.js | 3 years |
| GCP (Google Cloud Platform) | 2 years |
| Databases | 2 years |
Required domain experience
| Domain | Minimum experience |
| SaaS | 2 years |
Required languages
| Language | Level |
| English | C2 - Proficient |