Data Engineer (Databricks)

At Uvik Software, we are seeking an experienced Data Engineer with strong hands-on expertise in Databricks and distributed data processing frameworks. You will be a key contributor to designing, building, and maintaining scalable data pipelines and analytics platforms in the cloud. This is a long-term opportunity to work on high-impact data projects in a dynamic, cloud-native environment.


Key Responsibilities:

- Design, develop, and maintain scalable ETL/ELT pipelines using Apache Spark on Databricks
- Build robust data architectures to support advanced analytics and machine learning use cases
- Collaborate with data scientists, analysts, and other engineers to ensure data quality and accessibility
- Optimize Spark jobs and clusters for performance, cost-efficiency, and reliability
- Implement data integration from various sources (structured, semi-structured, unstructured)
- Use CI/CD pipelines for code integration, testing, and deployment to production
- Monitor and troubleshoot production data pipelines and Spark jobs
- Leverage the latest Databricks features (Unity Catalog, Delta Live Tables, Workflows, etc.)
- Follow data governance, compliance, and security best practices



Must-Have Qualifications:

- 5+ years of experience in data engineering, data pipelines, and analytics platforms
- Hands-on experience in delivering 6–8+ data engineering projects on Databricks
- Proven understanding of Spark runtime internals and advanced distributed computing
- Deep expertise in at least one cloud platform: AWS, Azure, or GCP
- Working knowledge of at least two cloud ecosystems (e.g., AWS + Azure, or Azure + GCP)
- Databricks Certified Data Engineer Professional certification, along with the required foundational courses
- Strong proficiency in Python or Scala for Spark-based development
- Familiarity with Delta Lake, Lakehouse architecture, and data versioning
- Solid knowledge of SQL, data modeling, and data warehousing concepts
- Experience with CI/CD tools such as Git, Jenkins, Azure DevOps, or GitLab CI
- Comfortable working in agile teams and communicating in English (B2+ level)



Nice to Have:

- Experience with Databricks Unity Catalog, Auto Loader, Delta Live Tables, or MLflow
- Familiarity with infrastructure as code (IaC) tools like Terraform or CloudFormation
- Exposure to data governance tools or frameworks (e.g., Collibra, Alation)
- Understanding of ML engineering workflows and deployment patterns
- Contributions to open-source or internal data frameworks

The job ad is no longer active
