The Data Engineer will play a critical role in ensuring that Openpay's data infrastructure is designed, implemented, maintained, and optimized to support the data needs of stakeholders such as Business Intelligence and Data Science. You will collaborate with colleagues on building the data warehouse, and will design and optimize scalable data pipelines that deliver the data used to generate insights and support decisions across the company.
The Responsibilities
You will collaborate with the Team Lead to identify relevant architectural data needs and opportunities, building Openpay's data infrastructure in line with identified business needs.
You will collaborate on designing the optimal data schema for the data warehouse.
You will create, optimize and test scalable ETL scripts (Python or Spark) that implement the transformations defined by business users to produce ready-to-consume data for business reports and data science models.
You will ensure that data is stored in the data warehouse in a format that supports efficient, fast querying.
You will collaborate with Data Analysts and Data Scientists to ensure that ETL pipelines are properly tested.
You will help identify relevant automation opportunities that reduce the cost and time needed to implement data solutions.
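The ETL work described above can be illustrated with a minimal sketch. This is a hypothetical example only: the record fields, table name, and the extract/transform/load helpers are illustrative, not part of Openpay's actual pipeline, and a production version would read from sources such as S3 and write to the warehouse via Spark or Glue.

```python
# Minimal ETL sketch (hypothetical data and names): extract raw payment
# records, transform them into a report-ready shape, and load them into
# an in-memory "warehouse" table.

def extract():
    # In practice this would read from S3, a database, or an API.
    return [
        {"order_id": 1, "amount": "10.50", "currency": "AUD"},
        {"order_id": 2, "amount": "3.25", "currency": "AUD"},
    ]

def transform(rows):
    # Cast string amounts to integer cents so downstream aggregation is exact.
    return [
        {
            "order_id": r["order_id"],
            "amount_cents": round(float(r["amount"]) * 100),
            "currency": r["currency"],
        }
        for r in rows
    ]

def load(rows, warehouse):
    # Append the transformed rows to the target table.
    warehouse.setdefault("payments", []).extend(rows)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse["payments"][0]["amount_cents"])  # 1050
```

Keeping extract, transform, and load as separate functions is what makes the pipeline testable in isolation, which is the point of the collaboration with Data Analysts and Data Scientists mentioned above.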
The Requirements
A degree in Computer Science or a related quantitative field; equivalent working experience is also acceptable.
Proven working experience (min. 3 years) in designing and optimizing data pipelines and data architecture.
Practical experience with AWS technology stack (especially DMS, Glue, Lambda and S3) is appreciated.
Experience with real-time data processing is an advantage.
Expert knowledge of Python, Spark and SQL.
A proactive team player and creative thinker who is not afraid to take initiative when appropriate.