Senior Data Engineer for a product company

What’s in it for you?
• Working with cutting-edge data streaming technologies
• Helping to disrupt a century-old industry in a startup environment
• Having direct influence on how we build our data streaming platform
• Opportunity to grow and develop your core skills
• Delivering a greenfield system
• Working with a diverse, multicultural team in an agile environment
• Variety of knowledge sharing and self-development opportunities
• Competitive salary
• State-of-the-art, centrally located offices with a warm atmosphere and good working conditions
• Opportunity to travel to the London office
• Occasional visits to vessels to observe how our software and hardware are used in the real world
• Experience firsthand the squad-chapter-guild workflow model, our version of the Spotify model
Responsibilities:
• You will work closely with the CTO
• Contribute to data pipeline design, development and monitoring
• Develop Java-based Kafka Streams applications
• Operate managed Kafka and Elasticsearch clusters in production
• Conform to company-wide code standards and tech culture
• Own the full lifecycle of data pipelines: take the services you build from design, through implementation, and into production
• Design monitoring for data pipeline operations, with proper alerting

Requirements:
• Full hands-on development experience
• Proficiency in:
o Java and/or Golang
o Developing with the latest Java version, building with Gradle, and testing and deploying your own code into production
o Kafka technologies (Kafka Streams DSL, Processor API, Kafka Connect, Avro Schema Registry)
o Elasticsearch
o Code & systems testing
o RDBMS and NoSQL databases
o Kubernetes and Docker
o Advanced use of git
o Use of Unix/Linux shell commands
o Microservices architecture concepts
o Event-driven paradigm
o Evaluating, designing, and building data solutions for operations & support (e.g. metrics, tracing, logging)
• Understanding of:
o Protobuf/gRPC
o Best practices in scaling & monitoring data pipelines
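To make the event-driven paradigm named above concrete: the sketch below is not from the job ad and uses no Kafka libraries; it is a minimal, plain-Java illustration in which a `BlockingQueue` and hypothetical event names stand in for what a Kafka topic and a Streams topology would provide in the real system.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Event-driven pipeline sketch: a producer publishes events onto a queue
// and a consumer reacts to each one as it arrives. In production this
// role would be played by Kafka topics and a Kafka Streams topology.
public class EventPipelineSketch {
    static final String POISON = "__STOP__"; // sentinel marking end of stream

    // Pure per-event transformation (trivial placeholder logic).
    static String enrich(String event) {
        return event.toUpperCase() + ":processed";
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> topic = new ArrayBlockingQueue<>(16);
        List<String> sink = new ArrayList<>();

        // Consumer: blocks until an event arrives, then processes it.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String event = topic.take();
                    if (POISON.equals(event)) return;
                    sink.add(enrich(event));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: publishes events, then the stop sentinel.
        for (String e : new String[]{"order-created", "order-paid"}) {
            topic.put(e);
        }
        topic.put(POISON);
        consumer.join();

        System.out.println(sink); // [ORDER-CREATED:processed, ORDER-PAID:processed]
    }
}
```

Keeping the per-event logic in a pure function like `enrich` is what makes such pipelines straightforward to unit-test independently of the messaging layer.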

Nice to have:
• AWS stack experience
• Ability to perform basic devops tasks
• Python and related data science packages such as pandas/numpy/scikit-learn
• Basic data analysis techniques
• Understanding of statistics

Experience:
Demonstrated track record and proficiency in:
• Delivering features autonomously while coordinating closely with the team
• Delivering code both from a precise architecture spec and without a precise spec or requirements
• Automated testing
• Working with CI and GitOps practices
• Delivering code to production
• Maintaining production ready code
• Collaborating in small but fast paced teams
• Event-driven architecture and message passing
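On the automated-testing point above: the snippet below is not from the job ad; it is a library-free illustration (the rule and names are hypothetical) of the habit it asks for, i.e. keeping logic in pure functions and asserting on their behavior before code ships.

```java
// Automated-testing sketch: pipeline logic lives in a pure, easily
// tested function, and explicit checks run before anything ships.
public class RecordValidatorTest {
    // Hypothetical validation rule: a record id must be non-null,
    // non-empty, and contain no spaces.
    static boolean isValidId(String id) {
        return id != null && !id.isEmpty() && !id.contains(" ");
    }

    // Explicit check helper, so failures surface even when JVM
    // assertions (-ea) are disabled.
    static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        check(isValidId("vessel-42"), "plain id should pass");
        check(!isValidId(""), "empty id should fail");
        check(!isValidId("bad id"), "id with space should fail");
        check(!isValidId(null), "null id should fail");
        System.out.println("all checks passed");
    }
}
```

In a real codebase these checks would live in a JUnit suite run by Gradle in CI, but the shape is the same.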

More about you:
• Good level of English
• Willingness to learn and an open mind about new technologies
• Confident to operate in a fast-paced environment
• A collaborative approach and willingness to engage in an environment of active idea sharing
• Ability to learn autonomously
• Excellent all-round communication skills

About Freelancer Roman Shevchenko

We help companies employ the right people!

The job ad is no longer active
Job unpublished on 10 May 2020
