Senior Data Engineer

Our client is a global jewelry manufacturer undergoing a major transformation, moving from IaaS-based solutions to a modern Azure PaaS data platform. As part of this journey, you will design and implement scalable, reusable, and high-quality data products using technologies such as Data Factory, Data Lake, Synapse, and Databricks. These solutions will enable advanced analytics, reporting, and data-driven decision-making across the organization. By collaborating with product owners, architects, and business stakeholders, you will play a key role in maximizing the value of data and driving measurable commercial impact worldwide.

Responsibilities:

  • Design, build, and maintain scalable, efficient, and reusable data pipelines and products on the Azure PaaS data platform.
  • Collaborate with product owners, architects, and business stakeholders to translate requirements into technical designs and data models.
  • Enable advanced analytics, reporting, and other data-driven use cases that support commercial initiatives and operational efficiencies.
  • Ingest, transform, and optimize large, complex data sets while ensuring data quality, reliability, and performance.
  • Apply DevOps practices, CI/CD pipelines, and coding best practices to ensure robust, production-ready solutions.
  • Own and monitor the stability of delivered data products, driving continuous improvement and measurable business benefits.
  • Promote a “build-once, consume-many” approach to maximize reuse and value creation across business verticals.
  • Contribute to a culture of innovation by following best practices while exploring new ways to push the boundaries of data engineering.

Skills Required:

Must-Have Skills:

  • 5+ years of experience as a Data Engineer with proven expertise in Azure Synapse Analytics and SQL Server.
  • Advanced proficiency in SQL, covering relational databases, data warehousing, dimensional modeling, and cubes.
  • Practical experience with Azure Data Factory, Databricks, and PySpark.
  • Track record of designing, building, and delivering production-ready data products at enterprise scale.
  • Strong analytical skills and ability to translate business requirements into technical solutions.
  • Excellent communication skills in English, with the ability to adapt technical details for different audiences.
  • Experience working in Agile/Scrum teams.

Nice-to-Have Skills:

  • Familiarity with infrastructure tools such as Kubernetes and Helm.
  • Experience with Kafka.
  • Experience with DevOps and CI/CD pipelines.

Required Languages:

English: B2 (Upper Intermediate)