Data Developer (BI/Analytics) IRC276594
Join a new team building a modern, cloud-native data platform from the ground up for a leader in the healthcare industry. You will be a core builder of this greenfield project, designing and implementing the data pipelines that will ingest partner data, transform it into valuable assets, and power critical business analytics in Tableau. This is a unique opportunity to build a best-in-class data solution on the Microsoft Azure stack.
As part of the GlobalLogic team, you will grow, be challenged, and expand your skill set, working alongside highly experienced and talented people.
If this sounds like an exciting opportunity for you, send over your CV!
Requirements (Must-Haves)
- Azure Data Stack: Deep, hands-on experience with the core Azure data services:
  - Azure Data Factory (ADF)
  - Azure Databricks
  - Azure Data Lake Storage (ADLS)
  - Azure Synapse Analytics
- Programming: Strong proficiency in Python and Spark (PySpark).
- SQL: Advanced SQL skills for complex querying, data modeling, and performance tuning.
- Data Warehousing Concepts: Solid understanding of ETL/ELT principles and data modeling for analytics.
Nice-to-Have Skills
- Streaming Experience: Familiarity with real-time stream processing using Azure Event Hubs and Databricks Structured Streaming (for our future Option C architecture); see the sketch after this list.
- Advanced Databricks: Experience with Delta Live Tables (DLT) and Unity Catalog (for our Option B architecture).
- CI/CD: Practical experience with CI/CD pipelines in Azure DevOps, including using Azure DevOps to automate the deployment of data platform components such as ADF pipelines, Databricks notebooks, and Synapse SQL scripts.
- BI Familiarity: Understanding of how BI tools like Tableau connect to and consume data from a data warehouse.
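For context on the streaming nice-to-have above, the following is a minimal sketch of one common pattern: consuming Azure Event Hubs through its Kafka-compatible endpoint with Databricks Structured Streaming and landing the events in a Delta table. The namespace, event hub, storage paths, and table names are hypothetical placeholders, not details of this project.

```python
# Minimal sketch, assuming Azure Event Hubs is read via its Kafka-compatible
# endpoint from Databricks. All names below (namespace, hub, paths, tables)
# are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-partner-events").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "example-namespace.servicebus.windows.net:9093")
    .option("subscribe", "partner-events")  # the Event Hub name acts as the Kafka topic
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option(
        "kafka.sasl.jaas.config",
        'kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required '
        'username="$ConnectionString" password="<event-hubs-connection-string>";',
    )
    .load()
)

# Keep the raw payload and event time, and stream them into a Delta table
# that downstream batch jobs can curate.
(
    events.selectExpr("CAST(value AS STRING) AS body", "timestamp")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "abfss://raw@examplelake.dfs.core.windows.net/_checkpoints/partner_events")
    .toTable("raw.partner_events")
)
```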
Job responsibilities
- Data Pipeline Development: Design, build, and maintain scalable and reliable ELT data pipelines on the Azure platform, using Azure Data Factory (ADF) for orchestration and Azure Databricks for transformation.
- Data Transformation: Write high-quality, efficient Python and PySpark code within Databricks to clean, validate, enrich, and reshape complex raw data into curated, analysis-ready datasets.
- Data Modeling: Implement and optimize data models (e.g., star schemas) within Azure Synapse Analytics to ensure high performance for BI queries and reporting.
- Ingestion & Integration: Develop and support the data ingestion process, handling data from partner APIs using Azure Functions.
- Collaboration & DevOps: Work closely with the Solution Architect, BI Developer, and DevOps Engineer to build an integrated solution and automate the deployment of your data pipelines using Azure DevOps (ADO).
Required languages
English: B2 - Upper Intermediate
Azure Data Lake Storage, Azure Data Factory, SQL, ETL, Data Modeling, Python, Databricks, Azure Data Warehouse