Data Engineer

We are looking for a Senior Data Engineer to join us on a freelance basis to drive R&D initiatives, PoCs, and architecture validation for our enterprise and startup clients. You'll work at the edge of innovation: validating modern data technologies, designing scalable prototypes, and enabling real-world applications in complex, regulated environments.

This role is ideal for an experienced engineer with strong systems thinking, deep knowledge of modern data architectures, and the ability to move fast in R&D cycles that may mature into production environments.


What You’ll Do

  • Design and build proof-of-concept pipelines on cloud-native environments (AWS/GCP) to validate performance, scalability, and architecture.
  • Work across OLAP systems such as ClickHouse, Redshift, and BigQuery, and support data persistence strategies for batch and real-time use cases.
  • Contribute to the design of data-agnostic platforms capable of working across structured, semi-structured, and unstructured data sources.
  • Evaluate and experiment with modern approaches such as Data Mesh, Lakehouse, and unified metadata/catalog strategies.
  • Prototype graph-based analytics components where applicable (e.g., Neo4j, Amazon Neptune).
  • Collaborate with architects, AI/ML engineers, and domain experts to deliver validated data foundations for further automation and intelligence.
  • Work with enterprise teams, adapting solutions to their compliance, security, and governance requirements.


Required Skills & Experience

  • 7+ years in data engineering, with a strong record of delivering backend and infrastructure for large-scale data systems.
  • Hands-on experience with AWS and/or GCP (IAM, VPCs, storage, compute, cost control).
  • Proven use of ClickHouse, Redshift, BigQuery, or similar for high-performance analytical workloads.
  • Practical knowledge of Lakehouse, Data Mesh, and hybrid Data Lake + Warehouse models.
  • Experience building data ingestion frameworks (batch & streaming), including CDC, schema evolution, and orchestration.
  • Strong Python or Go; advanced SQL; CI/CD familiarity.
  • Comfort interacting with enterprise stakeholders; clear, concise documentation and proposal skills.
  • Product-oriented, research-driven, and able to handle ambiguity while delivering value fast.

Bonus Points

  • Experience with graph technologies (e.g., Neo4j, Neptune, TigerGraph).
  • Familiarity with dbt, Airflow, Dagster, or similar orchestrators.
  • Knowledge of open metadata/catalog tools (OpenMetadata, DataHub, Amundsen).
  • Experience in highly regulated or enterprise environments.
  • Involvement in cloud cost optimization, FinOps, or scalable query engine evaluation.


Engagement Details

  • Type: Freelance / B2B contract
  • Extension: High potential to convert into a core team role or longer-term engagement
  • Location: Remote (preference for overlap with European time zones)

Required skills

  • OLAP
  • ClickHouse
  • Python
  • Jupyter Notebook
  • FastAPI
  • Data Warehouse
  • BI
  • GCP (Google Cloud Platform)
  • AWS

Required languages

  • English: B2 (Upper-Intermediate)
