Senior Data Engineer (up to $6000)
Job Description
- Solid experience with the Azure data ecosystem: Data Factory, ADLS Gen2, Azure SQL, Blob Storage, Key Vault, and Functions
- Proficiency in Python and SQL for building ingestion, transformation, and processing workflows
- Clear understanding of Lakehouse architecture principles, Delta Lake patterns, and modern data warehousing
- Practical experience building config-driven ETL/ELT pipelines, including API integrations and Change Data Capture (CDC) (see the sketch after this list)
- Working knowledge of relational databases (MS SQL, PostgreSQL) and exposure to NoSQL concepts
- Ability to design data models and schemas optimized for analytics and reporting workloads
- Comfortable working with common data formats: JSON, Parquet, CSV
- Experience with CI/CD automation for data workflows (GitHub Actions, Azure DevOps, or similar)
- Familiarity with data governance practices: lineage tracking, access control, encryption
- Strong problem-solving mindset with attention to detail
- Clear written and verbal communication for async collaboration
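To make the "config-driven" requirement concrete, here is a minimal Python sketch of the idea: each data source is declared in a YAML file, and the ingestion code stays generic. The YAML layout, source names, and local bronze/ landing path are illustrative assumptions, not the project's actual design.

```python
# Illustrative config-driven ingestion sketch, not TeamCraft's pipeline.
# Example pipeline.yaml (hypothetical):
#   sources:
#     - name: sales_api
#       url: https://example.com/api/sales
import json
import os
import urllib.request

import yaml  # third-party: pip install pyyaml


def load_config(path: str) -> dict:
    """Read the pipeline definition (sources, URLs) from a YAML file."""
    with open(path) as f:
        return yaml.safe_load(f)


def ingest_api_source(source: dict) -> list:
    """Pull JSON records from the REST endpoint described in config."""
    with urllib.request.urlopen(source["url"]) as resp:
        return json.load(resp)


def run(config_path: str) -> None:
    """Iterate over configured sources and land raw records in Bronze."""
    config = load_config(config_path)
    os.makedirs("bronze", exist_ok=True)
    for source in config["sources"]:
        records = ingest_api_source(source)
        with open(f"bronze/{source['name']}.json", "w") as f:
            json.dump(records, f)


if __name__ == "__main__":
    run("pipeline.yaml")
```

Adding a new source then means editing the YAML, not the code, which is what keeps pipelines like this maintainable as the number of integrations grows.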
Nice-to-Have
- Experience with Azure Databricks (Delta Live Tables, Unity Catalog, Time Travel)
- Proficiency with Apache Spark using PySpark for large-scale data processing (a short sketch follows this list)
- Experience with Azure Service Bus/Event Hub for event-driven architectures
- Familiarity with machine learning and AI integration within a data platform context (RAG, vector search, Azure AI Search)
- Data quality frameworks (Great Expectations, dbt tests)
- Experience with Power BI semantic models and Row-Level Security
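As a rough illustration of the PySpark item above, the sketch below reads a Bronze Parquet dataset, standardizes types, deduplicates on a business key, and writes the result to Silver. The storage paths and column names (order_id, order_date, amount) are hypothetical placeholders.

```python
# Illustrative Bronze -> Silver cleanup in PySpark. Paths and column
# names (order_id, order_date, amount) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("silver-orders").getOrCreate()

# Read raw landed data from the Bronze layer.
bronze = spark.read.parquet("abfss://bronze@<account>.dfs.core.windows.net/orders")

# Standardize types and drop duplicate business keys.
silver = (
    bronze
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)

# Persist the cleaned, typed dataset to the Silver layer.
silver.write.mode("overwrite").parquet(
    "abfss://silver@<account>.dfs.core.windows.net/orders"
)
```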
Job Responsibilities
- Design, implement, and optimize scalable and reliable data pipelines using Azure Data Factory, Synapse, and Azure data services
- Develop and maintain config-driven ETL/ELT solutions for batch and API-based data ingestion
- Build Medallion architecture layers (Bronze → Silver → Gold) ensuring efficient, reliable, and performant data processing
- Ensure data governance, lineage, and compliance using Azure Key Vault and proper access controls
- Collaborate with developers and business analysts to deliver trusted datasets for reporting, analytics, and AI/ML use cases
- Design and maintain data models and schemas optimized for analytical and operational workloads
- Implement cross-system identity resolution (global IDs, customer/property keys across 8+ platforms); a sketch follows this list
- Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
- Participate in architecture discussions, backlog refinement, and sprint planning
- Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
- Perform code reviews and foster knowledge sharing within the team
- Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
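One common deterministic approach to the identity-resolution responsibility above is sketched here: derive a stable key from each source system's natural key, then map those keys to a single global ID through a crosswalk table. The system names and key formats are invented for illustration; production resolution usually layers fuzzy matching and survivorship rules on top.

```python
# Illustrative identity-resolution helper. System names and key formats
# are hypothetical; real resolution typically adds fuzzy matching and
# survivorship rules on top of deterministic keys like this.
import hashlib


def source_key(system: str, natural_key: str) -> str:
    """Deterministic, collision-resistant key for (system, natural key)."""
    raw = f"{system.lower()}|{natural_key.strip().lower()}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]


# Crosswalk: many per-system keys resolve to one global customer ID.
crosswalk = {
    source_key("crm", "CUST-001"): "G-0001",
    source_key("billing", "8842"): "G-0001",  # same real-world customer
}


def resolve(system: str, natural_key: str) -> str | None:
    """Look up the global ID for a record from any source platform."""
    return crosswalk.get(source_key(system, natural_key))


if __name__ == "__main__":
    print(resolve("crm", "CUST-001"))  # -> G-0001
```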
Why TeamCraft?
- Greenfield project - build architecture from scratch, no legacy debt
- Direct impact - your pipelines power real AI products and business decisions
- Small team, big ownership - no bureaucracy, fast iteration, your voice matters
- Stable foundation - US-based company, 300+ employees
- Growth trajectory - scaling with technology as the driver
About the Project
TeamCraft is a large U.S. commercial roofing company undergoing an ambitious AI transformation. We're building a centralized data platform from scratch - a unified Azure Lakehouse that integrates multiple operational systems into a single source of truth (Bronze → Silver → Gold).
This is greenfield development with real business outcomes - not legacy maintenance.
Required skills and experience
| Skill | Experience |
|---|---|
| Python | 3 years |
| SQL | 3 years |
| Azure | 1 year |
Required languages
| Language | Level |
|---|---|
| English | B2 - Upper Intermediate |
Salary range: $4000-6000