Data Engineer (Haskell)

Our data platform's purpose is to improve the consumer experience and the performance of communications activity, to mitigate risk, drive product innovation, and achieve operational excellence. You will contribute to the implementation and operation of the platform, from data collection through storage to processing. We seek individual excellence within collaborative teams. Culture is very important to us, and we value working alongside driven people who share our values of:

Growth Mindset

We’re happy to elaborate on what these values mean to us, and how they might help foster your growth, personally and professionally.

What will you do:
The Data Engineer supports the design, implementation and maintenance of data flow channels and data processing systems that enable the collection, storage, batch and real-time processing, and analysis of information in a scalable, repeatable and secure manner. They focus on defining optimal solutions for data collection, processing and warehousing; they design, code and test data systems and implement them within the internal infrastructure. They collect, parse, manage, analyse and visualise large data sets, turning information into insights accessible through multiple platforms.
They are passionate about numbers and work with large data sets, with a keenness for understanding business processes and resolving challenges in order to deliver solutions built on clean, interlinked databases and architectures.
Become involved in the internal DevSecOps culture and software engineering guilds, building relationships with other developers and identifying and implementing best practices


Identify business needs
Build data processing systems
Experience with the Big Data technology stack
Experience in Haskell, plus Java, Scala, Go, or a similar language
Optimise solution performance using AWS or GCP big data services
Understanding of algorithms and data structures
Maintain data processing solutions
SQL or NoSQL database knowledge
Experience working with data-sets in an academic environment
Demonstrated aptitude for statistics, with a clear interest to learn more
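As a flavour of the batch-processing work described above, here is a minimal Haskell sketch: a small batch job that parses raw "user,amount" records, drops malformed rows, and aggregates totals per key. All names here (Event, parseEvent, totals) are illustrative assumptions, not from an actual codebase.

```haskell
import qualified Data.Map.Strict as Map
import Data.Maybe (mapMaybe)

data Event = Event { userId :: String, amount :: Int }

-- Parse a "user,amount" line, skipping malformed input.
parseEvent :: String -> Maybe Event
parseEvent line = case break (== ',') line of
  (u, ',' : a) | [(n, "")] <- reads a -> Just (Event u n)
  _ -> Nothing

-- Aggregate the total amount per user across the batch.
totals :: [String] -> Map.Map String Int
totals = Map.fromListWith (+)
       . map (\e -> (userId e, amount e))
       . mapMaybe parseEvent

main :: IO ()
main = print (Map.toList (totals ["alice,3", "bob,5", "alice,2", "bad line"]))
-- prints [("alice",5),("bob",5)]
```

The same parse-validate-aggregate shape scales from an in-memory list to a streaming or distributed setting; only the data source and the fold change.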

General skill set:

Design batch and streaming pipelines
Version control systems (Git is mandatory)
Understanding of Linux OS administration basics
Technical/intermediate level of English

Nice to have:

Certificates from online Data Engineering courses
Open-source contributions
Understanding of the principles of developing software in an Agile environment
Understanding of RESTful services

Soft skills:

Strong problem-solving skills & ability to learn in a fast-paced environment
Interest in the latest programming trends such as functional and reactive programming
Strong work ethic
Communication skills


What we offer:

Professional and career development opportunities
Comfortable and fully equipped workplace (a dual-monitor PC)
Democratic management style
Compensation for sports activities, certifications, conferences and seminars
Stable salary and social guarantees

What is the hiring process?
We believe that diverse teams build better products and strive to offer equal opportunity to all applicants. If your application is successful, you will move through a few short phases:

We'll check your CV meets the requirements for the role, which are detailed above.
We'll arrange a 20-minute phone interview with you to talk about you and the role and gauge the relevance of your experience.
We'll send you a short technical challenge (1-2 hours of homework, with a one-week deadline)
We'll invite you to our office to meet our team and discuss your solution to the challenge

About ScalHive

You will work on a wide range of interesting projects with our partners, using Spark, Scala, Akka, Kafka. We aim to build software that is distributed, reactive and scalable. Our experienced, cross-functional Agile teams enable the delivery of entire solutions not just lines of code.
