Senior Data Engineer

Description

Our client is a global technology and manufacturing company with a long history of innovation across multiple industries, including industrial solutions, worker safety, and consumer goods. Headquartered in the United States, the company develops and produces a wide range of products – from adhesives, abrasives, and protective materials to personal safety equipment, electronic components, and optical films. With tens of thousands of products in its portfolio and operations in markets around the world, it plays a key role in delivering high-quality, reliable solutions for both businesses and consumers.

Requirements

We are looking for a highly skilled and experienced Senior Data Engineer to join our team. In this role, you will be a key player in designing, building, and optimizing our data architecture and pipelines. You will work on a complex data project, transforming raw data into reliable, high-quality assets ready for analytics, data science, and business intelligence. As a senior member of the team, you will also be expected to mentor junior and mid-level engineers, drive technical best practices, and contribute to the strategic direction of our data platform.

Required Qualifications & Skills

  • 5+ years of professional experience in data engineering or a related role.
  • A minimum of 3 years of deep, hands-on experience using Python for data processing, automation, and building data pipelines.
  • A minimum of 3 years of strong, hands-on experience with advanced SQL for complex querying, data manipulation, and performance tuning.
  • Proven experience with cloud data services, preferably Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage).
  • Hands-on experience with big data processing frameworks like Spark (PySpark) and platforms such as Databricks.
  • Solid experience working with large, complex data environments, including data processing, data integration, and data warehousing.
  • Proficiency in data quality assessment and improvement techniques.
  • Experience working with and cleansing a variety of data formats, including unstructured and semi-structured data (e.g., CSV, JSON, Parquet, XML).
  • Familiarity with Agile and Scrum methodologies and project management tools (e.g., Azure DevOps, Jira).
  • Excellent problem-solving skills and the ability to communicate complex technical concepts effectively to both technical and non-technical audiences.

Preferred Qualifications & Skills

  • Knowledge of DevOps methodologies and CI/CD practices for data pipelines.
  • Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
  • Experience with consuming data from REST APIs.
  • Experience with database design, optimization, and performance tuning for software application backends.
  • Knowledge of dimensional data modeling concepts (Star Schema, Snowflake Schema).
  • Familiarity with modern data architecture concepts such as Data Mesh.
  • Real-world experience supporting and troubleshooting critical, end-to-end production data pipelines.

Job Responsibilities

  • Architect & Build Data Pipelines: Design, develop, and maintain robust, scalable, and reliable data pipelines using Python, SQL, and Spark on the Azure cloud platform.
  • End-to-End Data Solutions: Architect and implement end-to-end data solutions, from data ingestion and processing to storage in our data lake (Azure Data Lake Storage, Delta Lake) and data warehouse.
  • Cloud Data Services Management: Utilize Azure services like Azure Data Factory, Databricks, and Azure SQL Database to build, orchestrate, and manage complex data workflows.
  • Data Quality & Governance: Implement and enforce comprehensive data quality frameworks, including data profiling, cleansing, and validation routines to ensure the highest levels of data integrity and trust.
  • Performance Optimization: Analyze and optimize data pipelines for performance, scalability, and cost-efficiency, ensuring our systems can handle growing data volumes.
  • Mentorship & Best Practices: Mentor and provide technical guidance to junior and mid-level data engineers. Lead code reviews and champion best practices in data engineering, coding standards, and data modeling.
  • Stakeholder Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements, provide technical solutions, and deliver actionable data products.
  • System Maintenance: Support and troubleshoot production data pipelines, identify root causes of issues, and implement effective, long-term solutions.

Required Skills & Experience

  • Data Engineering: 5 years
  • Python: 3 years
  • SQL: 3 years
  • Cloud data services: 3 years
  • Big Data processing frameworks: 3 years
  • ETL: 3 years

Required languages

English: B2 – Upper Intermediate
Published 24 October