Data Engineer

Who we are:

Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

 

About the Product: 

Our client, Harmonya, develops an AI-powered product data enrichment, insights, and attribution platform for retailers and brands. Its proprietary technology processes millions of online product listings, extracting valuable insights from titles, descriptions, ingredients, consumer reviews, and more.

Harmonya builds robust tools to help uncover insights about the consumer drivers of market performance, improve assortment and merchandising, categorize products, guide product innovation, and engage target audiences more effectively.

 

About the Role: 
We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.

 

Key Responsibilities: 

  • Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
  • Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
  • Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
  • Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
  • Apply best practices for data security, integrity, and performance across all systems.

 

Required Competence and Skills:

  • 4+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
  • Proven track record in designing, developing, and deploying complex data applications.
  • Hands-on experience with orchestration and processing tools (e.g., Apache Airflow and/or Apache Spark).
  • Experience with public cloud platforms (preferably GCP) and cloud-native data services.
  • Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).
  • Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
  • Strong verbal and written communication skills in English.
  • A strong team player, capable of working cross-functionally.

 

Nice to have:

  • Familiarity with data science tools and libraries (e.g., pandas, scikit-learn).
  • Experience working with Docker and Kubernetes.
  • Hands-on experience with CI tools such as GitHub Actions.

 

Why Us?

We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).

We provide full accounting and legal support in all countries where we operate.

We offer a fully remote work model, provide a powerful workstation, and cover a co-working space in case you need it.

We offer a highly competitive package with yearly performance and compensation reviews.

 

Published 13 June