Jabil Uzhorod

Joined in 2022

    Senior Backend Engineer (Go / Distributed Systems)

    Full Remote · Hungary, Poland, Ukraine · 5 years of experience · English - B2

    Project Description
    We are transforming our fleet management platform and data processing infrastructure from a monolithic application to a modern, event-driven microservices architecture. This includes shifting from a tightly coupled relational database to a scalable, multi-tenant event-stream platform and developing Kubernetes operators.

     

    Job Description

    • We are looking for a strong Senior Backend Engineer to help evolve our platform into a multi-tenant, event-driven architecture.
    • You can come from any mature backend background (Java, .NET, Node.js, Rust, Python, etc.), but you must be willing to switch to Go, as our backend services are developed in Go. Prior Go experience is not required, but you need to be comfortable ramping up quickly and writing production-grade Go code.
    • The core requirement is strong distributed-systems expertise and hands-on experience with event-streaming technologies such as Pulsar or Kafka.

     

    Experience Level: 5+ years of backend engineering

     

    Responsibilities

    • Design and maintain distributed microservices (Go-based environment).
    • Work with event-streaming systems such as Apache Pulsar or Kafka.
    • Deploy and operate services in Kubernetes across cloud environments.
    • Ensure observability: logs, metrics, tracing, and reliability.
    • Participate in architecture discussions, code reviews, and performance optimization.
    • Collaborate with Data Engineering, Platform, and DevOps teams.

     

    Requirements

    Core Backend Skills

    • 5+ years of backend development in any production language: Java, .NET, Node.js, Python, Rust, Elixir, Ruby, etc.
    • Strong experience with event-driven systems: Apache Pulsar, Kafka, or similar.
    • Experience designing and maintaining distributed microservices.
    • Solid understanding of concurrency, scalability, and high-throughput system design.
    • Willingness to adopt Go as the primary language for the project.

       

    Cloud & Deployment

    • Practical experience with Kubernetes operators, CRDs, and Helm.
    • Understanding of cloud CI/CD systems and delivery pipelines (GitHub Actions).

    Data

    • Experience with PostgreSQL and Redis.
    • Understanding of Data Engineering concepts, including ETL/ELT and streaming workflows.

     

    Nice to Have

    • Experience with Go (Golang).
    • Experience with C/C++ (networking, concurrency, high-performance systems).
    • Experience with observability stacks: Prometheus, Grafana, OpenTelemetry.
    • Experience building multi-tenant architectures.
    • Python experience for data workflows.

    Database Engineer

    Full Remote · Ukraine, Poland, Hungary · Product · 5 years of experience · English - None

    We’re hiring a Database Engineer to design, build, and operate reliable data platforms and pipelines. You’ll focus on robust ETL/ELT workflows, scalable big data processing, and cloud-first architectures (Azure preferred) that power analytics and applications.

     

    What You’ll Do

     

    • Design, build, and maintain ETL/ELT pipelines and data workflows (e.g., Azure Data Factory, Databricks, Spark, ClickHouse, Airflow, etc.).
    • Develop and optimize data models and data warehouse/lake/lakehouse schemas (partitioning, indexing, clustering, cost/performance tuning, etc.).
    • Build scalable batch and streaming processing jobs (Spark/Databricks, Delta Lake; Kafka/Event Hubs a plus).
    • Ensure data quality, reliability, and observability (tests, monitoring, alerting, SLAs).
    • Implement CI/CD and version control for data assets and pipelines.
    • Secure data and environments (IAM/Entra ID, Key Vault, strong tenancy guarantees, encryption, least privilege).
    • Collaborate with application, analytics, and platform teams to deliver trustworthy, consumable datasets.

     

    Required Qualifications

     

    • ETL or ELT experience required (ADF/Databricks/dbt/Airflow or similar).
    • Big data processing experience required (e.g., Spark or Databricks).
    • Cloud experience required; Azure preferred (Synapse, Data Factory, Databricks, Azure Storage, Event Hubs, etc.).
    • Strong SQL and performance tuning expertise; hands-on with at least one warehouse/lakehouse (Synapse/Snowflake/BigQuery/Redshift or similar).
    • Solid data modeling fundamentals (star/snowflake schemas, normalization/denormalization, CDC, etc.).
    • Experience with CI/CD, Git, and infrastructure automation basics.

     

    Nice to Have

     

    • Streaming pipelines (Kafka, Event Hubs, Kinesis, Pub/Sub) and exactly-once/at-least-once patterns.
    • Orchestration and workflow tools (Airflow, Prefect, Azure Data Factory).
    • Python for data engineering.
    • Data governance, lineage, and security best practices.
    • Infrastructure as Code (Terraform) for data platform provisioning.