Senior Data Engineer Azure (IRC278989)
Job Description
- Strong hands-on experience with Azure Databricks for large-scale data processing and analytics, including Delta Live Tables (DLT) pipelines, Lakeflow Connect, Unity Catalog, Time Travel, and Delta Sharing (a brief DLT sketch follows this list)
- Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
- Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hubs, Azure Functions, Azure Data Factory, and Azure Cosmos DB
- Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
- Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
- Proficiency with relational databases (Microsoft SQL Server, MySQL, PostgreSQL) and NoSQL databases
- Strong understanding of data modeling, schema design, and database performance optimization
- Practical experience working with various file formats, including JSON, Parquet, and ORC
- Familiarity with machine learning and AI integration within the data platform context
- Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
- Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
- Strong analytical and problem-solving skills with attention to detail
- Excellent teamwork and communication skills
- Upper-Intermediate English (spoken and written)
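For orientation, here is a minimal sketch, in PySpark, of the kind of Delta Live Tables pipeline this role involves. It is illustrative only: it runs inside a Databricks DLT pipeline (where `dlt` and `spark` are provided), not as a standalone script, and the storage path, table names, and columns are hypothetical.

```python
# Minimal Delta Live Tables sketch: runs only inside a Databricks DLT
# pipeline, not as a standalone script. All names and paths are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw claim events ingested from cloud storage via Auto Loader.")
def claims_raw():
    return (
        spark.readStream.format("cloudFiles")       # Auto Loader incremental ingestion
        .option("cloudFiles.format", "json")
        .load("abfss://landing@example.dfs.core.windows.net/claims/")
    )

@dlt.table(comment="Cleaned, typed claims for downstream consumers.")
@dlt.expect_or_drop("valid_amount", "amount > 0")   # data-quality expectation
def claims_clean():
    return (
        dlt.read_stream("claims_raw")
        .withColumn("amount", F.col("amount").cast("decimal(10,2)"))
        .dropDuplicates(["claim_id"])               # stateful dedup; fine for a sketch
    )
```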
Job Responsibilities
- Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
- Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data (a CDC-style sketch follows this list)
- Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault (a short access-control sketch also follows this list)
- Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
- Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
- Integrate data pipelines with event-based and microservice architectures leveraging Azure Service Bus, Event Hubs, and Azure Functions
- Design and maintain data models and schemas optimized for analytical and operational workloads
- Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
- Participate in architecture discussions, backlog refinement, estimation, and sprint planning
- Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
- Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
- Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
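As a rough illustration of the CDC-style streaming work above (not the project's actual code), here is a minimal PySpark sketch that streams change events from storage and merges them into a Delta table using delta-spark's MERGE API. The paths, table names, and `op` column convention are hypothetical assumptions.

```python
# Minimal CDC upsert sketch: stream change events and MERGE them into Delta.
# Assumes a Spark session with the delta-spark package; all names hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdc-upsert").getOrCreate()

def upsert_batch(batch_df, batch_id):
    """Apply one micro-batch of change events to the target Delta table."""
    target = DeltaTable.forName(spark, "lake.silver.claims")
    (target.alias("t")
        .merge(batch_df.alias("s"), "t.claim_id = s.claim_id")
        .whenMatchedDelete(condition="s.op = 'DELETE'")
        .whenMatchedUpdateAll(condition="s.op = 'UPDATE'")
        .whenNotMatchedInsertAll(condition="s.op != 'DELETE'")
        .execute())

query = (
    spark.readStream.format("json")
    .schema("claim_id STRING, op STRING, amount DOUBLE, updated_at TIMESTAMP")
    .load("abfss://cdc@example.dfs.core.windows.net/claims/")
    .writeStream
    .foreachBatch(upsert_batch)                     # per-batch MERGE into Delta
    .option("checkpointLocation", "abfss://chk@example.dfs.core.windows.net/claims/")
    .start()
)
query.awaitTermination()
```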
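The Unity Catalog governance duties can likewise be sketched briefly. These statements assume a Unity Catalog-enabled Databricks workspace; the catalog, schema, table, and group names are made up for illustration.

```python
# Minimal Unity Catalog access-control sketch (hypothetical names throughout).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("uc-grants").getOrCreate()

# Grant read access on a curated table to an analyst group.
spark.sql("GRANT SELECT ON TABLE lake.silver.claims TO `data-analysts`")

# Review current grants as part of a governance/lineage audit.
spark.sql("SHOW GRANTS ON TABLE lake.silver.claims").show(truncate=False)
```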
Department/Project Description
GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client, a leading global holding company, is developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.
Required languages
English: B2 (Upper-Intermediate)