Data Engineer

Responsibilities:

  • Design and develop ETL pipelines using Airflow and Apache Spark for Snowflake and Trino (a rough sketch of such a pipeline follows this list)
  • Optimize existing pipelines and improve the team's Airflow-based framework
  • Collaborate with analysts, optimize complex SQL queries, and help foster a strong data-driven culture
  • Research and implement new data engineering tools and practices


Requirements:

  • Experience with Apache Spark
  • Experience with Airflow
  • Proficiency in Python
  • Familiarity with Snowflake and Trino is a plus
  • Understanding of data architecture, including logical and physical data layers
  • Strong SQL skills for analytical queries
  • English proficiency at B1/B2 level


About the Project:

We’re a fast-growing tech startup in the B2B marketing space, developing a next-generation platform for identifying and engaging target customers.

Our product combines artificial intelligence, big data, and proprietary de-anonymization tools to detect behavioral signals from potential buyers in real time and convert them into high-quality leads.

The team is building a solution that helps businesses identify “hot” prospects even before they express interest, making marketing and sales efforts highly targeted and personalized.
