Databricks Data Engineer


We are looking for a Databricks Data Engineer to design and implement scalable data processing solutions using Databricks and Apache Spark. You will work with large datasets, develop transformation logic, and optimize performance of data workflows.

This role involves daily hands-on development and active participation in delivery projects.


What You Will Do

  • Design and develop data processing workflows using Databricks and Apache Spark 
  • Implement data transformations using PySpark 
  • Process and optimize large datasets in batch or streaming environments 
  • Build and maintain data pipelines within the Databricks platform 
  • Monitor, troubleshoot, and improve performance of Spark jobs 
  • Collaborate with architects and engineers on data platform solutions 


Required Technical Skills

Databricks and Data Processing

  • Hands-on experience with Databricks 
  • Strong experience with Apache Spark and PySpark 
  • Experience building and maintaining data processing pipelines 
  • Experience working with large datasets and distributed data processing 
  • Understanding of lakehouse or modern data platform architecture 


Programming and Tools

  • Strong Python programming skills 
  • Strong SQL skills 
  • Experience implementing data transformations and performance optimization 
  • Experience working with version control systems such as Git 
  • Experience working with common data formats such as Parquet, JSON, and CSV


Azure Platform

  • Azure Data Lake Storage (ADLS) 
  • Azure Databricks 
  • Azure Data Factory

Required Languages

English: C1 (Advanced)
Published 30 April