Big Data Engineer with Scala and Spark
Requirements:
- Strong knowledge of Scala
- In-depth knowledge of Hadoop and Spark, experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams)
- Understanding of best practices in data quality and quality engineering
- Experience with version control systems, Git in particular
- Ability to quickly learn new tools and technologies
Responsibilities:
- Participate in the design and development of a big data analytics application
- Design, support and continuously enhance the project code base, continuous integration pipeline, etc.
- Write complex ETL processes and frameworks for analytics and data management
- Implement large-scale near real-time streaming data processing pipelines
- Work with a team of industry experts on cutting-edge big data technologies to develop solutions for deployment at massive scale
Will be a plus:
- Knowledge of Unix-based operating systems (bash/ssh/ps/grep etc.)
- Experience with GitHub-based development processes
- Experience with JVM build systems (SBT, Maven, Gradle)
We offer:
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits program
- Social package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Opportunity for long business trips to the US and the possibility of relocation
The job ad is no longer active
Job unpublished on 14 December 2020