Data Engineer (Product Analytics)

Equals 5

We are looking for a Data / Backend Engineer to architect and scale the core data infrastructure behind our healthcare marketing platform. In this role, you will design high-throughput data systems that process millions of behavioral events, power real-time attribution and user scoring, and enable precise targeting of healthcare professionals across 20+ channels.

You will work at the intersection of data engineering, backend systems, and marketing infrastructure, building scalable pipelines, tracking systems, and APIs that support analytics, segmentation, and campaign activation. The role also involves experimenting with AI-powered automation and agent-based workflows to improve internal operations and product capabilities.

This is a high-ownership position where you will shape architecture decisions, build critical platform components from the ground up, and help evolve our data platform as the company scales.

 

Responsibilities

- Architect and own data-heavy backend systems powering real-time attribution, user modeling, and scoring

- Build and evolve client-side tracking pixels from scratch (similar to the FB pixel, Amplitude snippet, or Heap autocapture)

- Set up and manage scalable ETL workflows: cleaning, merging, enriching, splitting, and transforming millions of behavioral events

- Lead the build of CDP / CDXP-style data layers to support segmentation, scoring, and activation across channels

- Connect and compose systems using modern low-code / no-code tools (e.g., n8n) to accelerate ops

- Implement and maintain high-availability APIs for analytics, scoring, and campaign delivery

- Drive experimentation around AI-powered agents and automation tools that can boost internal workflows

 

Requirements

- System Design & Data Architecture:
Designs scalable, event-driven data platforms with strong decoupling, fault tolerance, and horizontal scalability. Builds real-time and batch pipelines using lakehouse architectures and streaming systems (Kafka / PubSub).

- Distributed Data Processing (Spark / BigQuery / Iceberg):
Expert in large-scale data processing, query optimization, and cost-efficient analytics. Deep understanding of Spark execution, partitioning, and modern table formats (Iceberg / Delta).

- Cloud & Infrastructure (GCP / Kubernetes):
Builds resilient cloud-native data platforms on GCP and Kubernetes using IaC, autoscaling, and secure access (IAM, Workload Identity).

- Python Engineering:
Develops high-performance Python services for data and ML workloads with focus on concurrency, clean architecture, and production-grade quality.

- Databases & Performance (PostgreSQL / OLTP):
Optimizes high-throughput OLTP systems through indexing, partitioning, and query analysis (EXPLAIN ANALYZE), with a solid understanding of MVCC.

- Reliability & Observability (SRE):
Implements resilient data systems with idempotency, retries, observability, SLOs, and incident management.

- AI / LLM Integration:
Integrates LLMs into production pipelines with structured outputs, caching, batching, prompt versioning, and evaluation.

- Workflow Orchestration (Airflow / Dagster):
Designs scalable DAG-based workflows with dependency management, retries, and reproducible backfills.

- CI/CD & DevOps for Data:
Implements safe deployments for data systems including schema migrations, canary releases, and data contracts.

- Ownership & Engineering Maturity:
Owns end-to-end production systems and architecture decisions while balancing delivery speed, quality, and technical debt.

 

Nice-to-have: 

- Experience in MarTech / AdTech platforms (e.g., DSPs, attribution systems, DMP/CDP tools)

- Deep knowledge of data matching/fingerprinting techniques (identity graphs, IP + UA resolution, hashed PII workflows)

- Strong prompt engineering skills: not just ChatGPT 101, but chaining, system prompting, and tooling integration

 

What We Offer
- Fully remote with flexible hours (aligned with EU timezones for syncs).
- Influence on quality strategy across the entire engineering organization.
- A cross-team role with visibility into every part of the product.
- AI-first tooling.
- Claude Code licenses and cutting-edge AI development workflows. 
- A team with no bureaucracy; decisions are made fast.

Required languages

English B2 - Upper Intermediate
Ukrainian Native
Published 10 March