Senior Azure Data Engineer
Location: Poland or Romania (Remote/Hybrid, depending on candidate preference)
Contract: Long-term
Domain: Automotive / Motors
Seniority: Senior
About the Role
We are seeking a highly skilled Senior Azure Data Engineer to join our team and help develop and deliver our new data strategy. You will play a key role in implementing a modern data architecture that ensures fast, reliable, and secure data access across the organization.
We are currently undergoing a major platform transformation, modernizing our legacy Microsoft SQL environment into a cloud-first Azure Databricks ecosystem. You will play a central role in designing, implementing, and enhancing our data platform, as well as in building robust data ingestion and transformation pipelines.
This role combines deep technical expertise with strong collaboration skills. You will work closely with engineers, architects, and product owners to build a scalable and future-proof data environment.
Key Responsibilities
- Design, build, and optimize scalable data infrastructure, pipelines, and frameworks.
- Lead the refactoring of complex legacy systems and resolve high-impact technical challenges.
- Maintain existing systems to a high standard while driving incremental improvements.
- Contribute high-quality code aligned with best engineering practices (performance, reliability, security).
- Help implement the technical direction for the data engineering team.
- Support hiring, onboarding, and mentorship to grow engineering and data teams.
- Drive strong data culture and collaborate with third-party partners where needed.
- Work closely with product owners, architects, and engineers to deliver reliable, well-governed data solutions.
Requirements
Must-have:
- 5+ years of experience as a Data Engineer, ideally in cloud-based environments.
- Strong expertise in Azure data services (Azure Databricks, Data Lake, Data Factory, Synapse, etc.).
- Solid SQL experience plus comfort working with Microsoft SQL Server and migration projects.
- Proficiency in at least one major language (Python or Scala), including hands-on experience with PySpark.
- Hands-on experience with building ETL/ELT pipelines and data ingestion frameworks.
- Strong understanding of data modelling, performance optimization, and scalable architecture.
- Experience maintaining and refactoring complex data systems.
- Excellent communication skills and ability to collaborate across teams.
Nice-to-have:
- Experience with CI/CD for data pipelines and infrastructure-as-code.
- Background in automotive / motors / industrial domain.
- Experience mentoring others or contributing to team growth.
Required languages
| Language | Level |
| --- | --- |
| English | B2 - Upper Intermediate |
| Ukrainian | B2 - Upper Intermediate |