This US product company accelerates enterprise growth through the power of AI. Some of the world’s best brands leverage its platform to transform their business, including Lyft, New Relic, Okta, Tanium, and Zoom.

- Take ownership of Kafka clusters: review the current implementation, implement and automate best practices across all teams, and further improve observability and monitoring of the clusters;

- Evaluate alternatives to Databricks for running Spark jobs and notebooks. Ideally, the alternative environment should run jobs on top of Kubernetes and include logging, monitoring, and observability;

- Review the Hive Metastore and data lake implementation, evangelize best practices, and work to improve the performance of the environment across the company;

- Support all teams in adopting Kafka and the streaming stack as the company makes a big push from batch processing to streaming;

- Standardize the implementation of RDS Postgres through best practices, improving the availability, reliability, and visibility of the environment.
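The batch-to-streaming push mentioned above boils down to replacing periodic full recomputation with incremental, per-event updates. A minimal, hypothetical sketch in plain Python (not the company's actual stack; the event data is invented for illustration):

```python
from collections import defaultdict


class RunningStats:
    """Incrementally maintained count and sum per key -- the streaming
    counterpart of a batch GROUP BY / AVG job recomputed on a schedule."""

    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)

    def update(self, key, value):
        # One event at a time; no need to re-read the whole dataset.
        self.count[key] += 1
        self.total[key] += value

    def mean(self, key):
        return self.total[key] / self.count[key]


# Hypothetical event log shared by both styles.
events = [("checkout", 12.0), ("checkout", 18.0), ("login", 1.0)]

# Batch style: recompute over the full dataset each run.
checkout = [v for k, v in events if k == "checkout"]
batch_mean = sum(checkout) / len(checkout)

# Streaming style: fold events in as they arrive.
stats = RunningStats()
for key, value in events:
    stats.update(key, value)

assert stats.mean("checkout") == batch_mean  # both give 15.0
```

The streaming version holds only per-key aggregates in memory, so it scales with the number of keys rather than the number of events, which is what makes the migration from batch attractive.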

Requirements

- Demonstrated focus on automation;

- Proven track record of partnering with different engineering teams and completing company-wide initiatives in areas such as automation, security, migrations, and monitoring;

- Deep background in systems engineering and computer architecture (e.g. operating systems and networking);

- Experience with the big data stack deployed in AWS, such as Kafka, Hive, data warehouses, Hadoop (Cloudera, EMR), and Elasticsearch;

- Use of Terraform and Ansible to automate resources in the public cloud;

- Implementation of monitoring and observability in the stack;
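For Kafka specifically, one of the core observability signals implied by the requirements above is consumer lag: the gap between a partition's log-end offset and the consumer group's committed offset. A minimal sketch with invented offset snapshots (a real deployment would fetch these from the cluster):

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition consumer lag: log-end offset minus committed offset.

    Both arguments map partition id -> offset. A partition with no
    committed offset is treated as fully unread (committed = 0).
    """
    return {
        p: end_offsets[p] - committed_offsets.get(p, 0)
        for p in end_offsets
    }


# Hypothetical snapshot for a three-partition topic.
end = {0: 1500, 1: 980, 2: 2100}
committed = {0: 1500, 1: 900, 2: 1750}

lag = consumer_lag(end, committed)
# lag == {0: 0, 1: 80, 2: 350}
```

A steady non-zero lag is normal under load; the signal worth alerting on is lag that grows monotonically, which means consumers are falling behind producers.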

Nice to have
- Experience managing and performance-tuning Postgres instances.

About Alcor

Your own R&D center built from scratch, fully backed by our back-office services (Real Estate, RPO, Accounting & Legal, etc.).


Job posted on 5 September 2021
