Middle Data Engineer

Description:

Our Client is a Fortune 500 company and one of the largest global manufacturers, operating in industrial systems, worker safety, health care, and consumer goods. The company is dedicated to creating technology and products that advance every business, improve every home, and enhance every life.

Minimum Requirements:

  • At least 4 years of experience with SQL and Python, specifically for data engineering tasks.
  • Proficiency in working with cloud technologies such as Azure or AWS.
  • Experience with Spark and Databricks, or with similar big data processing and analytics platforms (a brief illustrative sketch of this kind of pipeline work follows this list).
  • Experience working with large data environments, including data processing, data integration, and data warehousing.
  • Experience with data quality assessment and improvement techniques, including data profiling, data cleansing, and data validation.
  • Familiarity with data lakes and their associated technologies, such as Azure Data Lake Storage, AWS S3, or Delta Lake, for scalable and cost-effective data storage and management.
  • Experience with NoSQL databases, such as MongoDB or Azure Cosmos DB, for handling unstructured and semi-structured data.
  • Fluent English.
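
The following is a minimal, illustrative sketch of the kind of Spark/Databricks work these requirements describe: a batch job that reads raw data from a cloud data lake, applies basic cleansing and validation, and writes to a Delta Lake table. The storage paths, column names, and table layout are hypothetical placeholders, not the Client's actual stack, and running it assumes a Spark environment with the delta-spark package (or a Databricks cluster).

    # Illustrative sketch only: paths, columns, and table layout are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Read raw JSON events from a data lake location (placeholder ADLS path).
    raw = spark.read.json("abfss://raw@example.dfs.core.windows.net/events/")

    # Basic data quality steps: drop rows missing required fields, deduplicate,
    # and derive a partition column from the event timestamp.
    clean = (
        raw.dropna(subset=["event_id", "event_ts"])
           .dropDuplicates(["event_id"])
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Append the cleansed data to a Delta Lake table partitioned by date.
    (clean.write.format("delta")
          .mode("append")
          .partitionBy("event_date")
          .save("abfss://curated@example.dfs.core.windows.net/events_delta/"))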

Additional Skillset (Nice to Have):

  • Familiarity with Agile and Scrum methodologies, including working with Azure DevOps and Jira for project management.
  • Knowledge of DevOps methodologies and practices, including continuous integration and continuous deployment (CI/CD).
  • Experience with Azure Data Factory or similar data integration tools for orchestrating and automating data pipelines.
  • Ability to build and maintain APIs for data integration and consumption (a minimal illustrative example follows this list).
  • Experience with data backends for software platforms, including database design, optimization, and performance tuning.
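
As a rough illustration of the API bullet above, here is a minimal read-only endpoint that serves warehouse data over HTTP. This is a sketch under stated assumptions: the Flask framework, the local SQLite file warehouse.db, and the daily_metrics table are hypothetical stand-ins for a real data backend, not anything specified in the role.

    # Illustrative sketch only: database file and table schema are hypothetical.
    from flask import Flask, jsonify
    import sqlite3

    app = Flask(__name__)

    @app.route("/metrics/<metric_date>")
    def metrics(metric_date):
        # Parameterized query to avoid SQL injection.
        conn = sqlite3.connect("warehouse.db")
        try:
            rows = conn.execute(
                "SELECT name, value FROM daily_metrics WHERE metric_date = ?",
                (metric_date,),
            ).fetchall()
        finally:
            conn.close()
        return jsonify([{"name": n, "value": v} for n, v in rows])

    if __name__ == "__main__":
        app.run()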

Job responsibilities:

  • Build, deploy, and maintain mission-critical analytics solutions that process data quickly at big-data scale.
  • Design and implement data integration pipelines.
  • Contribute design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, and data extraction, transformation, and loading across multiple data stores.
  • Take part in the full cycle of feature development (requirements analysis, decomposition, design, etc.).
  • Contribute to the overall quality of development services through brainstorming, unit testing, and proactively proposing improvements and innovations.

Required skills and experience:

  • Python: 4 years
  • SQL: 4 years
  • Data Engineering: 4 years
  • Azure: 4 years
  • Spark: 4 years
  • Databricks: 4 years
  • Data Warehouse: 3 years
  • ETL: 4 years
  • Data Lake: 3 years
  • Delta Lake: 3 years
  • NoSQL: 3 years
  • Big Data processing frameworks: 3 years

Required languages:

  • English: B2 (Upper Intermediate)
Published 12 February