Middle/Senior Big Data Engineer (Visual Platform)

We are looking for a Big Data professional who knows how to organize raw data masterfully and build an accessible, reliable platform for that purpose.

You are likely to love this role if you are:

- full of ideas, inspired to create robust and beautiful solutions

- excited about dynamic projects with challenging tasks

- proactive in choosing technologies and approaches when building a solution from scratch

 

Project

Our client is a UK-based web development company with an ambitious goal: to make the Internet more human-friendly by improving the visual representation of data in listed structures on the Web. By building a data visualization platform that processes large volumes of user interaction data, the company improves the user experience and usability of its clients' existing sites.

 

Our development team works on an efficient visual platform that dramatically improves browsing the web on any device, without plugins or responsive design. The project is in the active development phase, and the main technology stack is WebGL/three.js on the frontend and Ruby on the backend.

As a Big Data Engineer, you will be part of the data engineering team that will design and implement the Data Platform from scratch. The platform's goal is to provide comprehensive analytics based on user interaction data from clients' websites, such as heatmap click events, navigation events, conversion events, and other system events, in order to improve the visualization experience of those sites.

The customer is very open to the team's professional opinion, so you will have a chance to initiate changes and to choose technologies and approaches.

 

Responsibilities

- Design and build a Data Lake repository to store raw data ingested from different sources

- Design Data Warehouse structures and relations using Snowflake

- Implement ingestion pipelines to collect data from different sources using cloud-based solutions

- Create data processing flows using Spark and cloud services to transform and store raw information from data sources

- Apply Data Governance practices to meet regulatory requirements for data security and data quality
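To give a flavor of the first responsibility: the raw zone of a Data Lake is often just a date-partitioned object layout that downstream Spark jobs can prune by partition. The sketch below illustrates the idea with local files and stdlib JSON; the event fields, `source` name, and paths are hypothetical, and on the real platform this would target cloud object storage (e.g. S3) rather than a local directory.

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def raw_event_path(base: Path, source: str, event: dict) -> Path:
    """Build a Hive-style, date-partitioned path for one raw event."""
    ts = datetime.fromtimestamp(event["ts"], tz=timezone.utc)
    return (base
            / f"source={source}"
            / f"year={ts.year:04d}"
            / f"month={ts.month:02d}"
            / f"day={ts.day:02d}"
            / "events.jsonl")


def ingest(base: Path, source: str, events: list[dict]) -> list[Path]:
    """Append raw events, untransformed, into the partitioned layout."""
    written = []
    for event in events:
        path = raw_event_path(base, source, event)
        path.parent.mkdir(parents=True, exist_ok=True)
        with path.open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(event) + "\n")
        written.append(path)
    return written
```

Because the layout encodes `source=.../year=.../month=.../day=...` in the path, a Spark job can read a single partition instead of scanning the whole lake, which is the main point of partitioning the raw zone this way.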

 

Requirements

- Solid development experience with Java, Scala, or Python

- Solid experience with Spark

- Experience with designing or developing Data Lakes and Data Warehouses

- Experience with NoSQL and RDBMS technologies

- Experience with at least one cloud provider such as AWS, Google Cloud, or Azure

Would be a plus:

- Experience with designing or developing Data Lakes and Data Warehouses on AWS

- Experience with designing and implementing CI/CD practices

- Experience with key-value databases

- Experience with Grafana

- Experience with AWS Big Data services such as EMR and Glue

- Experience with serverless solutions such as AWS Lambda

The job ad is no longer active
Job unpublished on 17 June 2021
