Data Engineer

We are looking for a Senior Data Engineer to join us on a freelance basis and drive R&D initiatives, PoCs, and architecture validations for our enterprise and startup clients. You'll work at the edge of innovation: validating modern data technologies, designing scalable prototypes, and enabling real-world applications in complex, regulated environments. This role is ideal for an experienced engineer with strong systems thinking, deep knowledge of modern data architectures, and the ability to move quickly through R&D cycles that may mature into production environments.


What You’ll Do

  • Design and build proof-of-concept pipelines on cloud-native environments (AWS/GCP) to validate performance, scalability, and architecture.
  • Work across OLAP systems such as ClickHouse, Redshift, and BigQuery, and support data persistence strategies for batch and real-time use cases.
  • Contribute to the design of data-agnostic platforms capable of working across structured, semi-structured, and unstructured data sources.
  • Evaluate and experiment with modern approaches such as Data Mesh, Lakehouse, and unified metadata/catalog strategies.
  • Prototype graph-based analytics components where applicable (e.g., Neo4j, Amazon Neptune).
  • Collaborate with architects, AI/ML engineers, and domain experts to deliver validated data foundations for further automation and intelligence.
  • Work with enterprise teams, adapting solutions to their compliance, security, and governance requirements.


Required Skills & Experience

  • 7+ years in data engineering, with a strong record of delivering backend and infrastructure for large-scale data systems.
  • Hands-on experience with AWS and/or GCP (IAM, VPCs, storage, compute, cost control).
  • Proven use of ClickHouse, Redshift, BigQuery, or similar for high-performance analytical workloads.
  • Practical knowledge of Lakehouse, Data Mesh, and hybrid Data Lake + Warehouse models.
  • Experience building data ingestion frameworks (batch & streaming), including CDC, schema evolution, and orchestration.
  • Strong Python or Go; advanced SQL; CI/CD familiarity.
  • Comfort interacting with enterprise stakeholders; clear, concise documentation and proposal skills.
  • Product-oriented, research-driven, and able to handle ambiguity while delivering value fast.

Bonus Points

  • Experience with graph technologies (e.g., Neo4j, Neptune, TigerGraph).
  • Familiarity with dbt, Airflow, Dagster, or similar orchestrators.
  • Knowledge of open metadata/catalog tools (OpenMetadata, DataHub, Amundsen).
  • Experience in highly regulated or enterprise environments.
  • Involvement in cloud cost optimization, FinOps, or scalable query engine evaluation.


Engagement Details

  • Type: Freelance / B2B contract
  • Extension: High potential to convert into a core team role or longer-term engagement
  • Location: Remote (preference for overlap with European time zones)

Required Languages

English: B2 (Upper Intermediate)

Skills

Python, SQL, Data Warehouse, GCP, Apache Kafka