Data Engineer – Python/PySpark
Project Duration: 02.01.2026 – 31.12.2026
Experience: 6+ years
Industry: Banking & Finance
Work Format: Hybrid (mandatory office presence 3 days per week, Kraków, Poland)
About the Project
We are looking for a skilled Data Engineer to work on a banking sector project. The role focuses on building and optimizing data pipelines, working with big data tools and cloud platforms, and collaborating with international teams.
Note: Background check may be required by the client.
Requirements
Must-Have:
- 6–9 years of professional experience in data engineering or similar fields
- Proficiency in Python and experience with PySpark for large-scale, distributed data processing
- Hands-on experience with Microsoft Azure tools: Data Lake, Synapse, Data Factory, and Key Vault
- Strong knowledge of Databricks for big data analysis and workflow orchestration
- Advanced SQL/Oracle skills and understanding of relational database principles
- Experience with data modeling, building data warehouses, and system performance optimization
- Familiarity with CI/CD processes, Git, and general DevOps methodologies
- Strong analytical skills and comfort working in Agile teams
- Fluent English
Technical Skills
Core Skills:
- Python
- PySpark
- Microsoft Azure
- Data Lake
- Azure Synapse
- Azure Data Factory
- Key Vault
- Databricks
- SQL
- Oracle
- CI/CD
- Git
- DevOps
Required languages
- English: B2 (Upper Intermediate)
Published 12 January