Data Engineer

REQUIREMENTS

Our ideal candidate has experience with all of the technologies below, or with most of them and a willingness to learn the rest (a short sketch combining several of these follows the list).

- Spark

- Python

- Databricks

- Airflow

- Kafka

- AWS
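To give a flavour of how several of these fit together, here is a minimal sketch of a PySpark job that reads events from Kafka and writes hourly aggregates out as Parquet. The broker address, topic name, and S3 paths are illustrative assumptions rather than details from this posting, and running it assumes the spark-sql-kafka connector package is available.

```python
# Minimal sketch: Spark Structured Streaming reading from Kafka (assumed
# topic "events" on broker "broker:9092") and writing hourly counts to
# Parquet on S3. All names and paths here are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-aggregates").getOrCreate()

# Subscribe to the raw event stream.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Bucket events into 1-hour windows; the watermark bounds streaming state
# and lets finished windows be emitted in append mode.
counts = (
    events.withWatermark("timestamp", "2 hours")
    .groupBy(F.window("timestamp", "1 hour"))
    .count()
)

# Write results incrementally; the checkpoint makes the query restartable.
query = (
    counts.writeStream.outputMode("append")
    .format("parquet")
    .option("path", "s3a://example-bucket/hourly-counts/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/hourly-counts/")
    .start()
)
query.awaitTermination()
```

The watermark is what allows the aggregated stream to be written in append mode, which the Parquet sink requires; the checkpoint location lets the job resume from where it left off after a failure.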

NICE TO HAVE

- Experience building and optimizing streaming data pipelines fed by event-driven applications.

- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.

- Advanced SQL knowledge and experience working with relational databases.

- A successful history of manipulating, processing and extracting value from large disconnected datasets.

- Experience supporting and working with cross-functional teams in a dynamic environment.

- Experience with big data tools: Databricks, Spark, Kafka, etc.

- Experience with relational SQL and NoSQL databases, including Postgres/MySQL and MongoDB.

- Experience with data pipeline and workflow management tools such as Airflow (a minimal DAG sketch follows this list).

- Experience with AWS cloud services: EC2, ECS, MSK, RDS, Redshift.

- Experience with stream-processing systems: Spark Structured Streaming, ksqlDB, Kafka Streams, etc.
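As an illustration of the workflow-management bullet above, here is a minimal Airflow DAG sketch. The DAG id, schedule, and task bodies are hypothetical placeholders, not anything specific to this role.

```python
# Minimal sketch of an Airflow DAG: a daily extract-then-load pipeline.
# The DAG id, schedule, and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull the day's raw events from the source system (placeholder)."""


def load():
    """Write the transformed batch into the warehouse (placeholder)."""


with DAG(
    dag_id="daily_events",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract must finish before load starts
```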

WE OFFER

- Competitive salary

- Dynamic environment of a fast-growing company

- Bonuses

- Health insurance

RESPONSIBILITIES

- Create and maintain optimal data pipeline architecture.

- Assemble large, complex data sets that meet functional / non-functional business requirements.

- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using AWS ‘big data’ and SQL technologies.

- Build analytics tools that utilize the data pipeline to provide actionable insights into customer behaviour, operational efficiency and other key business performance metrics (a small sketch of such tooling follows this list).

- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

- Work with data and analytics experts to strive for greater functionality in our data systems.
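As a sketch of the analytics-tooling responsibility, the snippet below computes a daily-active-users metric from pipeline output stored in a Postgres/Redshift-style warehouse. The connection DSN and the table and column names ("events", "user_id", "event_date") are hypothetical.

```python
# Illustrative sketch: compute a daily-active-users metric from pipeline
# output in a Postgres/Redshift-style warehouse. Table and column names
# are hypothetical placeholders.
import psycopg2

METRIC_SQL = """
    SELECT event_date, COUNT(DISTINCT user_id) AS daily_active_users
    FROM events
    WHERE event_date >= CURRENT_DATE - 7
    GROUP BY event_date
    ORDER BY event_date;
"""


def fetch_dau(dsn):
    """Run the metric query and return (event_date, dau) rows."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(METRIC_SQL)
            return cur.fetchall()
```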

PROJECT DESCRIPTION

Develop, modernize and optimize a back-end for a portfolio of online games currently used by 100K+ users. You will work with our data & AI team to make these games safe, secure and fair.
