Big Data Engineer

Do you want to transform the world of digital reading? Join a platform with over 1M titles and 60M documents, and dive into challenges in app scaling, real-time data, and machine learning. We're looking for a Big Data Engineer to make an impact together. Fully remote, with top conditions. Shall we discuss?


Responsibilities:

– manage data quality and integrity

– assist with building tools and technology so that downstream customers can trust the data they're consuming

– collaborate cross-functionally with the Data Science and Content Engineering teams to troubleshoot, process, or optimize business-critical pipelines

– work with Core Platform to implement better processing jobs for scaling the consumption of streaming datasets


Requirements:

– 3+ years of experience in data engineering, creating or managing end-to-end data pipelines on large, complex datasets

– proficiency in Spark

– expertise in Scala and/or Python

– fluency with at least one dialect of SQL

– English level: Upper-Intermediate

As a plus:

– experience with streaming platforms, typically built around Kafka

– experience with Terraform and Airflow

– strong grasp of AWS data platform services and their strengths/weaknesses

– strong experience with Jira, Slack, JetBrains IDEs, Git, GitLab, GitHub, Docker, and Jenkins

– experience with Databricks


We offer:

– competitive compensation based on your technical skills

– long-term projects (12+ months) with great customers

– 5-day working week, 8-hour working day, flexible schedule

– democratic management style & friendly environment

– fully remote work

– annual paid vacation: 20 business days, plus unpaid vacation

– paid sick leave: 6 business days per year

– 12 national holidays

– corporate perks (external training, English courses, corporate events and team building)

– professional and personal growth
