Principal Engineer (offline)

Our goal is to revolutionise shipping by creating a suite of comprehensive software solutions for the Maritime industry. Our journey begins now. Over the next couple of years, our teams and squads will build more than 30 products from the ground up. This includes everything from global vessel tracking to vessel performance analysis, crew optimisation and so much more. This is an exciting and challenging opportunity to apply cutting-edge technology to revolutionising an iconic industry.

Our tech stack consists of React and React Native applications communicating via GraphQL with microservice containers orchestrated by Kubernetes. Internally, our services use gRPC for communication, achieve high scalability thanks to an Apache Kafka-based event-driven architecture, and persist data to a mix of RDBMS and NoSQL databases including PostgreSQL, MongoDB, S3 and Elasticsearch. We follow modern CI/CD and agile methodologies to deploy into production multiple times per week.
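As a flavour of the event-driven shape described above, here is a minimal in-memory publish/subscribe sketch. It is illustrative only: the real platform uses Kafka topics and consumer groups, and the topic names here are invented.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy event bus: producers publish events to a named topic and decoupled
// subscribers react. Kafka gives this same shape durability, partitioning
// and replay; this sketch only shows the paradigm.
public class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    /** Registers a handler that will receive every event published to `topic`. */
    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    /** Delivers `event` to all handlers subscribed to `topic`, if any. */
    public void publish(String topic, String event) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
    }
}
```

Producers and consumers never reference each other directly, which is what lets services scale and deploy independently.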

What's in it for you?
• Disrupting a century-old industry in a startup environment
• Opportunity to grow and develop your core skills
• Work with a diverse, multicultural team in an agile environment
• Opportunity to work with the latest cutting-edge technologies
• A variety of knowledge-sharing and self-development opportunities
• Competitive salary
• State-of-the-art, cool, centrally located offices with a tech-startup atmosphere
• Experience first-hand the squad-chapter-guild organisation structure, our version of the Spotify model

Data Intelligence Team purpose and scope
The team takes output from the IoT data quality team and makes it ready for consumption by products. It is accountable for enriching the data by applying models and technical expertise so that the data is easily consumed by platform products.
• Consumes and processes data output from the data quality team to meet Edge and shore product requirements
• Ensures there is no duplication of data-processing flows between vessel and cloud
• Ensures multiple data sources are aggregated so that product teams can transparently consume similar events

*Day to Day*
The Data Intelligence team is a development area that focuses on:
• Cloud and vessel data streaming pipelines
• Validating that AI models work with real-world data
• Implementing in production the AI solutions validated by the Leadership Team
• Designing and developing vessel-to-cloud and cloud-to-vessel communication
• Collaborating with the Edge, Platform and Data Infrastructure teams on service deployment to vessel computers

Primary Purpose of the Principal Engineer Role
• To act as the Engineer with oversight of Data Intelligence within the Platform
• To participate in technical leadership meetings and discussions
• Suggest architecture and implementation solutions to the Technical Leadership
• Communicate the agreed architecture decisions to squads and developers and ensure that they are understood and respected
• Ensure that the overall project proceeds without delays, according to the Product roadmap, which incorporates dependencies
• Align on the product vision with the Head of Product of the respective area
• Lead alignment of the wider development pool with the product vision agreed with the Head of Product of the respective area
• Merge and normalise multi-source data streams into a generalised data set
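To make the last duty concrete, here is a hypothetical sketch of normalising position reports from two upstream formats into one generalised record and merging the streams into a time-ordered set. The field names and unit conventions are invented for illustration, not the platform's actual schemas.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Normalise two hypothetical source formats into one generalised record,
// then merge both streams ordered by timestamp so product teams can
// consume similar events from a single data set.
public class StreamMerger {
    /** Generalised record products consume. */
    public record Normalised(String vesselId, long epochMillis, double lat, double lon) {}

    // Source A reports coordinates in degrees; source B in micro-degrees.
    public record SourceA(String imo, long ts, double lat, double lon) {}
    public record SourceB(String vessel, long ts, long microLat, long microLon) {}

    public static Normalised fromA(SourceA a) {
        return new Normalised(a.imo(), a.ts(), a.lat(), a.lon());
    }

    public static Normalised fromB(SourceB b) {
        // Convert micro-degrees to degrees so both sources share one unit.
        return new Normalised(b.vessel(), b.ts(), b.microLat() / 1e6, b.microLon() / 1e6);
    }

    /** Merges both normalised streams into one list ordered by event time. */
    public static List<Normalised> merge(List<SourceA> as, List<SourceB> bs) {
        List<Normalised> out = new ArrayList<>();
        as.forEach(a -> out.add(fromA(a)));
        bs.forEach(b -> out.add(fromB(b)));
        out.sort(Comparator.comparingLong(Normalised::epochMillis));
        return out;
    }
}
```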

Requirements
Full hands-on development experience
Proficiency in:
o Java and Golang
o Developing with the latest Java version, building with Gradle, and testing and deploying your own code into production
o Kafka technologies (Kafka Streams DSL, Processor API, Kafka Connect, Avro Schema Registry)
o Elasticsearch
o Time-series databases
o Code & systems testing
o RDBMS and NoSQL databases
o Kubernetes and Docker
o Advanced use of Git
o Unix/Linux shell commands
o Microservices architecture concepts
o The event-driven paradigm
o Evaluating/designing/building data solutions for operations & support (e.g. metrics, tracing, logging)
Understanding:
o Stream processing
o Time series
o Concepts of AI and machine learning pipelines
o Protobuf/gRPC
o Best practices in scaling & monitoring data pipelines
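Stream processing over time series often comes down to windowed aggregation. The sketch below shows a tumbling-window average in plain Java; it is only an illustration of the idea that Kafka Streams' windowed aggregations implement at scale, with made-up window size and readings.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Tumbling-window average over a time series: each reading lands in exactly
// one fixed-size window, and each window is aggregated independently.
public class WindowedAverage {
    public record Reading(long epochMillis, double value) {}

    /** Averages readings per fixed (tumbling) window of `windowMillis`. */
    public static Map<Long, Double> tumblingAverage(List<Reading> readings, long windowMillis) {
        Map<Long, double[]> acc = new LinkedHashMap<>(); // windowStart -> {sum, count}
        for (Reading r : readings) {
            long windowStart = (r.epochMillis() / windowMillis) * windowMillis;
            double[] sc = acc.computeIfAbsent(windowStart, k -> new double[2]);
            sc[0] += r.value();
            sc[1] += 1;
        }
        Map<Long, Double> out = new LinkedHashMap<>();
        acc.forEach((w, sc) -> out.put(w, sc[0] / sc[1]));
        return out;
    }
}
```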

Nice to have
• AWS stack experience
• Ability to perform basic DevOps tasks
• Python and related data-science packages such as pandas/numpy/scikit-learn
• Basic data analysis techniques
• Understanding of statistics

Experience
Demonstrated track record and proficiency in the points below:
• Delivering features autonomously with a high degree of team coordination
• Delivering code both from a precise architecture spec and without precise specs or requirements
• Automated testing
• Working with CI and GitOps practices
• Delivering code to production
• Maintaining production-ready code
• Collaborating in small but fast-paced teams
• Event-driven architecture and message passing

More about you
• Excellent all-round communication skills: a good level of verbal and written English
• Willingness to learn and an open mind about new technologies
• Confidence operating in a fast-paced environment
• A collaborative approach and willingness to engage in an environment of active idea sharing
• Ability to learn autonomously

About Ninety Percent of Everything

Our goal is to revolutionize the Maritime industry by creating a suite of comprehensive software and hardware solutions commercialized under the SaaS model. Over the next couple of years, our squads will build more than 30 products from the ground up. This includes everything from global vessel tracking to vessel performance analysis, crew allocation optimization and so much more. This is an exciting and challenging opportunity to apply cutting-edge technology to revolutionizing an iconic industry.

Our tech stack consists of React, React Native and Flutter applications communicating via GraphQL with microservice containers orchestrated by Kubernetes. The majority of our services are written in Golang, with stream processing in Java; they use gRPC for communication, achieve high scalability thanks to an Apache Kafka-based event-driven architecture, and persist data to a mix of RDBMS and NoSQL databases including PostgreSQL, MongoDB, Cassandra, S3 and Elasticsearch. We follow CI/CD and agile methodologies to deploy into production multiple times per week.

Company website:
https://www.90poe.io/

DOU company page:
https://jobs.dou.ua/companies/studio53/

The job ad is no longer active
Job unpublished on 29 April 2020
