Regular Data Engineer with on-call support

Project description

The team is responsible for building a group of services that ingest data from upstream systems into the Analytical Platform, then transform and store it. The system will provide access to an already-aggregated profile. The goal of the project is to speed up data delivery, enabling fresher data and quicker action. The candidate must also be prepared for periodic 12/7 on-call support.

Responsibilities

Collaborate with stakeholders to define Big Data architecture and integration standards.

Design and build data sources and integration points, with a focus on Spark SQL and BigQuery ETL.

Proactively analyze and optimize BigQuery and Spark SQL queries to ensure cost-efficiency and speed when processing Big Data workloads (see the dry-run sketch after this list).

Manage infrastructure using Terraform, GitLab CI, and GitHub Actions.

Conduct code review, refactoring, and testing to ensure high-quality deliverables.

Provide bugfixes for existing features and ensure system stability.
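
As an illustration of the optimization work above, here is a minimal Python sketch (not part of this role's actual codebase) that uses a BigQuery dry run to estimate how many bytes a query would scan before executing it. The my_dataset.events table and its columns are hypothetical placeholders.

    # Sketch: estimate BigQuery scan cost with a dry run before executing.
    # Assumes google-cloud-bigquery is installed and credentials are configured.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Filtering on a partition column (here, a hypothetical event_date)
    # is what keeps the scanned-bytes figure, and thus the cost, low.
    sql = """
        SELECT user_id, COUNT(*) AS events
        FROM `my_dataset.events`
        WHERE event_date = '2024-01-01'
        GROUP BY user_id
    """

    # dry_run=True validates the query and reports the bytes it would scan,
    # without running it or incurring query cost.
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=job_config)
    print(f"Query would scan {job.total_bytes_processed / 1e9:.2f} GB")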

Skills

Must have

Advanced SQL: Strong proficiency in writing complex queries, using window functions, and tuning performance.

1+ year with Google Cloud Platform (GCP), specifically:

BigQuery: Deep understanding of partitioning, clustering, and cost optimization.

Dataproc: Experience running and tuning Spark jobs.

1+ year with Apache Kafka (experience with Confluent Cloud is a plus).

Serialization formats: Avro, plus schema management with Confluent Schema Registry (see the producer sketch after this list).

Apache Airflow: Experience creating DAGs and an understanding of orchestration logic (see the DAG sketch after this list).

2+ years of experience with Java or Python (knowing both is a plus).

Hands-on experience with cloud-native applications.

Expertise with GitLab CI/CD and GitHub Actions.
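
To illustrate the Kafka and Avro items, a minimal producer sketch assuming the confluent-kafka Python package; the broker and registry URLs, the topic name, and the schema are hypothetical placeholders.

    # Sketch: produce Avro-encoded records to Kafka, with the schema
    # managed by Confluent Schema Registry.
    from confluent_kafka import SerializingProducer
    from confluent_kafka.schema_registry import SchemaRegistryClient
    from confluent_kafka.schema_registry.avro import AvroSerializer

    SCHEMA = """
    {
      "type": "record",
      "name": "Profile",
      "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "score", "type": "double"}
      ]
    }
    """

    registry = SchemaRegistryClient({"url": "http://localhost:8081"})  # placeholder URL

    producer = SerializingProducer({
        "bootstrap.servers": "localhost:9092",                         # placeholder brokers
        "value.serializer": AvroSerializer(registry, SCHEMA),
    })

    # The serializer validates each record against the schema and registers
    # the schema with the registry on first use.
    producer.produce(topic="profiles", key="user-1",
                     value={"user_id": "user-1", "score": 0.42})
    producer.flush()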
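
To make the Airflow expectation concrete, a minimal sketch of a two-task DAG, assuming Airflow 2.x; the DAG id, schedule, and task bodies are illustrative placeholders, not part of this role's actual pipelines.

    # Sketch: a two-task Airflow DAG with a simple dependency.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from the upstream system")

    def load():
        print("load transformed data into the Analytical Platform")

    with DAG(
        dag_id="profile_ingestion",  # hypothetical pipeline name
        schedule="@hourly",          # Airflow 2.4+; older versions use schedule_interval
        start_date=datetime(2024, 1, 1),
        catchup=False,
    ):
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Orchestration logic: extract must finish before load starts.
        extract_task >> load_task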

Nice to have

Knowledge of Medallion Architecture principles.

Experience with deploying and managing applications in Kubernetes.

Docker: Knowledge of containerization standards.

Logging & Monitoring: Experience with New Relic or Splunk.

Basic AI knowledge: Familiarity with AI tools and prompt engineering.

Working experience with Apache Flink.

Languages

English: B2 Upper Intermediate

Ukrainian: C1 Advanced
