Senior Data Engineer

  • Project Description:

    The primary goal of the project is the modernization, maintenance, and development of an eCommerce platform for a large US-based retail company serving millions of omnichannel customers each week.

    Solutions are delivered by several Product Teams focused on different domains: Customer, Loyalty, Search and Browse, Data Integration, and Cart.

    The current overriding priorities are onboarding new brands, re-architecture, database migrations, and migrating microservices to a unified cloud-native solution without any disruption to the business.

  • Responsibilities:

    We are looking for a Data Engineer who will be responsible for designing a data solution for a large retail company. The main focus is supporting the processing of large data volumes and integrating the solution into the current architecture.

  • Mandatory Skills Description:

    • Strong, recent hands-on expertise with Azure Data Factory and Synapse is a must (3+ years).
    • Strong expertise in designing and implementing data models, including conceptual, logical, and physical data models, to support efficient data storage and retrieval.
    • Strong knowledge of Microsoft Azure, including Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, and PySpark, for building scalable and reliable data solutions.
    • Extensive experience building robust and scalable ETL/ELT pipelines to extract, transform, and load data from various sources into data lakes or data warehouses (a minimal sketch of such a step follows this list).
    • Ability to integrate data from disparate sources, including databases, APIs, and external data providers, using appropriate techniques such as API integration or message queuing.
    • Proficiency in designing and implementing data warehousing solutions (dimensional modeling, star schemas, Data Mesh, Data/Delta Lakehouse, Data Vault).
    • Proficiency in SQL for complex queries, data transformations, and performance tuning on cloud-based data stores.
    • Experience integrating metadata and governance processes into cloud-based data platforms.
    • Certification in Azure, Databricks, or other relevant technologies is an added advantage.
    • Experience with cloud-based analytical databases.
    • Experience with Azure MI, Azure Database for PostgreSQL, Azure Cosmos DB, Azure Analysis Services, and Informix.
    • Experience with Python and Python-based ETL tools.
    • Experience with shell scripting (Bash, Unix, or Windows shell) is preferable.
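
    To illustrate the kind of ETL/ELT pipeline work described above, below is a minimal PySpark sketch of a single ELT step that reads raw JSON order events from Azure Data Lake Storage and loads a cleaned Delta table on Databricks. The storage path, column names, and target table are illustrative assumptions, not details of the actual platform.

      # Minimal ELT sketch (illustrative only): raw JSON orders -> curated Delta table.
      # The lake path, columns, and table name below are assumptions, not real project details.
      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.appName("orders_elt_example").getOrCreate()

      # Extract: read raw order events landed in the data lake (hypothetical path).
      raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

      # Transform: deduplicate, enforce types, and drop obviously invalid rows.
      clean = (
          raw.dropDuplicates(["order_id"])
             .withColumn("order_ts", F.to_timestamp("order_ts"))
             .withColumn("order_total", F.col("order_total").cast("decimal(18,2)"))
             .filter(F.col("order_total") >= 0)
      )

      # Load: append to a Delta table partitioned by order date.
      (
          clean.withColumn("order_date", F.to_date("order_ts"))
               .write.format("delta")
               .mode("append")
               .partitionBy("order_date")
               .saveAsTable("curated.orders")
      )

    In practice, a step like this would typically be triggered and orchestrated by an Azure Data Factory or Synapse pipeline rather than run standalone.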


  • Nice-to-Have Skills Description:

    • Experience with Elasticsearch.
    • Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
    • Troubleshooting and Performance Tuning: ability to identify and resolve performance bottlenecks in data processing workflows and optimize data pipelines for efficient data ingestion and analysis.
    • Collaboration and Communication: strong interpersonal skills to collaborate effectively with stakeholders, data engineers, data scientists, and other cross-functional teams.

  • Languages:
    • English: B2 Upper Intermediate
